AI & ML DIGITAL NOTES

ARTIFICIAL INTELLIGENCE & MACHINE LEARNING
Lecture Notes
B.Tech (III Year - II Sem) (2022-2023)

Prepared by:
Ms. Anitha Patibandla, Associate Professor
Dr. B. Jyothi, Professor
Ms. K. Bhavana, Assistant Professor

MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
(Autonomous Institution - UGC, Govt. of India)
Department of Electronics and Communication Engineering
Recognized under 2(f) and 12(B) of UGC Act 1956
(Affiliated to JNTUH, Hyderabad; Approved by AICTE; Accredited by NBA & NAAC 'A' Grade; ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post via Kompally), Secunderabad-500 100, Telangana State, India

III Year B.Tech. ECE - II Sem    L/T/P/C: 3/-/-/3
(R20A05XXX) ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

COURSE OBJECTIVES:
1. To train the students to understand different types of AI agents.
2. To understand various AI search algorithms.
3. To introduce the fundamentals of knowledge representation, the building of simple knowledge-based systems, and the application of knowledge representation.
4. To introduce the basic concepts and techniques of machine learning and the need for machine learning techniques for real-world problems.
5.
To provide an understanding of various machine learning algorithms and ways to evaluate the performance of ML algorithms.

UNIT - I:
Introduction: AI problems, Agents and Environments, Structure of Agents, Problem Solving Agents
Basic Search Strategies: Problem Spaces, Uninformed Search (Breadth-First, Depth-First Search, Depth-First with Iterative Deepening), Heuristic Search (Hill Climbing, Generic Best-First, A*), Constraint Satisfaction (Backtracking, Local Search)

UNIT - II:
Advanced Search: Constructing Search Trees, Stochastic Search, AO* Search Implementation, Minimax Search, Alpha-Beta Pruning
Basic Knowledge Representation and Reasoning: Propositional Logic, First-Order Logic, Forward Chaining and Backward Chaining, Introduction to Probabilistic Reasoning, Bayes Theorem

UNIT - III:
Machine Learning: Introduction, Machine Learning Systems, Forms of Learning: Supervised and Unsupervised Learning, Reinforcement Learning; theory of learning; feasibility of learning; data preparation; training versus testing and split.

UNIT - IV:
Supervised Learning
Regression: Linear Regression, Multilinear Regression, Polynomial Regression, Logistic Regression, Non-linear Regression, Model evaluation methods.
Classification: Support Vector Machines (SVM), Naive Bayes classification

UNIT - V:
Unsupervised Learning
Nearest neighbour models; k-means; clustering around medoids; silhouettes; hierarchical clustering; k-d trees; clustering trees; learning ordered rule lists; learning unordered rule lists
Reinforcement Learning - Example: Getting Lost; State and Action Spaces

TEXT BOOKS:
1. Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach, Third Edition, Prentice Hall, 2010.
2. Stephen Marsland, Machine Learning: An Algorithmic Perspective, Second Edition, Taylor & Francis Group, LLC, 2015.
3. Introduction to Machine Learning, The Wikipedia Guide.

REFERENCES:
1. Artificial Intelligence, Elaine Rich, Kevin Knight, Shivashankar B.
Nair, The McGraw Hill Publications, Third Edition, 2009.
2. George F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Pearson Education, 6th ed., 2003.
3. Ethem Alpaydin, Introduction to Machine Learning, Second Edition, The MIT Press, Cambridge, Massachusetts, London, England.
4. Tom M. Mitchell, Machine Learning, McGraw-Hill Science, ISBN: 0070428077.
5. Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms, Cambridge University Press, 2014.

COURSE OUTCOMES:
1. Understand the informed and uninformed problem types and apply search strategies to solve them.
2. Formulate difficult real-life problems in a state-space representation so as to solve them using AI techniques like searching and game playing.
3. Apply machine learning techniques in the design of computer systems.
4. Differentiate between various categories of ML algorithms.
5. Design and make modifications to existing machine learning algorithms.

UNIT - I
Introduction: AI problems, Agents and Environments, Structure of Agents, Problem Solving Agents
Basic Search Strategies: Problem Spaces, Uninformed Search (Breadth-First, Depth-First Search, Depth-First with Iterative Deepening), Heuristic Search (Hill Climbing, Generic Best-First, A*), Constraint Satisfaction (Backtracking, Local Search)

Introduction:
Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by John McCarthy in 1956.
Intelligence is the ability to acquire, understand and apply knowledge to achieve goals in the world.
AI is the study of the mental faculties through the use of computational models.
AI is the study of intellectual/mental processes as computational processes.
An AI program will demonstrate a high level of intelligence to a degree that equals or exceeds the intelligence required of a human in performing some task.
AI is unique, sharing borders with Mathematics, Computer Science, Philosophy, Psychology, Biology, Cognitive Science and many others. Although there is no clear definition of AI or even of intelligence, it can be described as an attempt to build machines that, like humans, can think and act, and are able to learn and use knowledge to solve problems on their own.

Sub-Areas of AI:
1) Game Playing
The Deep Blue chess program beat world champion Garry Kasparov.
2) Speech Recognition
PEGASUS spoken language interface to American Airlines' EASY SABRE reservation system, which allows users to obtain flight information and make reservations over the telephone. The 1990s saw significant advances in speech recognition; limited systems are now successful.
3) Computer Vision
Face recognition programs in use by banks, government, etc. The ALVINN system from CMU autonomously drove a van from Washington, D.C. to San Diego (all but 52 of 2,849 miles), averaging 63 mph day and night, and in all weather conditions. Handwriting recognition, electronics and manufacturing inspection, photo interpretation, baggage inspection, and reverse engineering to automatically construct a 3D geometric model.
4) Expert Systems
Application-specific systems that rely on obtaining the knowledge of human experts in an area and programming that knowledge into a system.
a. Diagnostic Systems: MYCIN system for diagnosing bacterial infections of the blood and suggesting treatments. Intellipath pathology diagnosis system (AMA approved). Pathfinder medical diagnosis system, which suggests tests and makes diagnoses. Whirlpool customer assistance center.
b. System Configuration: DEC's XCON system for custom hardware configuration. Radiotherapy treatment planning.
c. Financial Decision Making: Credit card companies, mortgage companies, banks, and the U.S. government employ AI systems to detect fraud and expedite financial transactions. For example, AMEX credit check.
d. Classification Systems: Put information into one of a fixed set of categories using several sources of information, e.g., financial decision-making systems. NASA developed a system for classifying very faint areas in astronomical images into either stars or galaxies with very high accuracy by learning from human experts' classifications.
5) Mathematical Theorem Proving
Use inference methods to prove new theorems.
6) Natural Language Understanding
AltaVista's translation of web pages. Translation of Caterpillar truck manuals into 20 languages.
7) Scheduling and Planning
Automatic scheduling for manufacturing. DARPA's DART system used in Desert Storm and Desert Shield operations to plan logistics of people and supplies. American Airlines rerouting contingency planner. European Space Agency planning and scheduling of spacecraft assembly, integration and verification.
8) Artificial Neural Networks
9) Machine Learning

Applications of AI:
AI algorithms have attracted close attention of researchers and have also been applied successfully to solve problems in engineering. Nevertheless, for large and complex problems, AI algorithms consume considerable computation time due to the stochastic nature of their search approaches.
Business: financial strategies
Engineering: check designs, offer suggestions to create new products, expert systems for all engineering problems
Manufacturing: assembly, inspection and maintenance
Medicine: monitoring, diagnosing
Education: in teaching
Fraud detection
Object identification
Information retrieval
Space shuttle scheduling

Building AI Systems:
1) Perception
Intelligent biological systems are physically embodied in the world and experience the world through their sensors (senses). For an autonomous vehicle, input might be images from a camera and range information from a rangefinder. For a medical diagnosis
For 2 medical diagnosis 3 © scone wih one camesystem,perception is the set of symptoms and test results that have been obtained and input to thesystem manually. 2) Reasoning Inference, decision-making, classification from what is sensed and what the internal "model" is of the world. Might be a neural network, logical deduction system, Hidden Markov Model induction,heuristic searching a problem space, Bayes Network inference, genetic algorithms, etc. Includes areas of knowledge representation, problem solving, decision theory, planning, game theory, machine learning, uncertainty reasoning, etc. 3)Action Biological systems interact within their environment by actuation, speech, etc. All behavior is centered around actions in the world. Examples include controlling the steering of a Mars rover or autonomous vehicle, or suggesting tests and making diagnoses for a medical diagnosis system. Includes areas of robot actuation, natural language generation, and speech synthesis. The definitions of Al: a) "The exciting new effort to make b) "The study of mental faculties computers think... 
machines with through the use of computational ‘minds, in the full and literal sense" models" (Charniak and McDermott, (Haugeland, 1985) 1985) "The automation of] activities that we "The study of the computations associate with human thinking, activities that make it possible to perceive, such as decision-making, problem solving, reason,and act" (Winston, 1992) learning..."(Bellman, 1978) © scone wih one came€) "The art of creating machines that 4d) "A field of study that seeks to explain performfunctions that require and emulate intelligent behavior in intelligence when performed by people" terms of computational processes" (Kurzweil, 1990) (Schalkoff, 1 990) “The branch of computer science “The study of how to make that is concerned with the computersdo things at which, at the automation of intelligent moment, people are better" (Rich behavior" and Knight, 1 (Luger and Stubblefield, 1993) 991) ‘The definitions on the top, (a) and (b) are concerned with reasoning, whereas those on the bottom, (e) and (d) address behavior. The definitions on the left, (a) and (¢) measure success interms of human performance, and those on the right, (b) and (d) measure the ideal concept of intelligence called rationality Intelligent Systems In order to design intelligent systems, itis important to categorize them into four categories(Luger and Stubberfield 1993), (Russell and Norvig, 2003) 1. Systems that think like humans 2, Systems that think rationally 3. Systems that behave like humans Systems that behave rationally Human- Rationall Like y Cognitive Science Approach Laws of thought Approach “Machines that think like humans” “Machines that think Rationally” Turing Test Approach Rational Agent Approach “Machines that behave like humans” “Machines that behave Rationally” © scone wine scarCognitive Science: Think Human-Like a. Requires a model for human cognition. Precise enough models allowsimulation by computers. b. 
Focus is not just on behavior and I/O, but on the reasoning process itself.
c. Goal is not just to produce human-like behavior but to produce a sequence of steps of the reasoning process, similar to the steps followed by a human in solving the same task.

Laws of Thought: Think Rationally
a. The study of mental faculties through the use of computational models; that is, the study of computations that make it possible to perceive, reason and act.
b. Focus is on inference mechanisms that are provably correct and guarantee an optimal solution.
c. Goal is to formalize the reasoning process as a system of logical rules and procedures of inference.
d. Develop systems of representation to allow inferences like "Socrates is a man. All men are mortal. Therefore Socrates is mortal."

Turing Test: Act Human-Like
a. The art of creating machines that perform functions requiring intelligence when performed by people; that is, the study of how to make computers do things which, at the moment, people do better.
b. Focus is on action, not on intelligent behavior centered around a representation of the world.
Example: Turing Test
- Three rooms contain a person, a computer and an interrogator.
- The interrogator can communicate with the other two by teletype (so that the machine is not asked to imitate the appearance or voice of the person).
- The interrogator tries to determine which is the person and which is the machine.
- The machine tries to fool the interrogator into believing that it is the human, and the person also tries to convince the interrogator that he or she is the human.
- If the machine succeeds in fooling the interrogator, then we conclude that the machine is intelligent.

Rational Agent: Act Rationally
a. Tries to explain and emulate intelligent behavior in terms of computational processes; that is, it is concerned with the automation of intelligent behavior.
b. Focus is on systems that act sufficiently well, if not optimally, in all situations.
c. Goal is to develop systems that are rational and sufficient.

Agents and Environments:
Fig 2.1: Agents and Environments

Agent: An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
- A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for actuators.
- A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
- A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.

Percept: We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence: An agent's percept sequence is the complete history of everything the agent has ever perceived.

Agent function: Mathematically speaking, we say that an agent's behavior is described by the agent function, which maps any given percept sequence to an action.

Agent program: Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.

To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Fig 2.1.5. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Fig 2.1.6.

Fig 2.1.5: A vacuum-cleaner world with just two locations.
Fig 2.1.6: Partial tabulation of a simple agent function for the vacuum-cleaner world (the Action column follows the rule stated above: suck if dirty, otherwise move to the other square):

Percept Sequence            Action
[A, Clean]                  Right
[A, Dirty]                  Suck
[B, Clean]                  Left
[B, Dirty]                  Suck
[A, Clean], [A, Clean]      Right
[A, Clean], [A, Dirty]      Suck

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

Fig 2.1.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept (location, status) and returns an action each time.

A rational agent is one that does the right thing. We say that the right action is the one that will cause the agent to be most successful. That leaves us with the problem of deciding how and when to evaluate the agent's success. We use the term performance measure for the "how": the criteria that determine how successful an agent is.

Example:
- Agent: cleaning the dirty floor
- Performance measure: amount of dirt collected
- When to measure: weekly, for better results

What is rational at any given time depends on four things:
1. The performance measure defining the criterion of success
2. The agent's prior knowledge of the environment
3. The actions that the agent can perform
4. The agent's percept sequence up to now

Omniscience, Learning and Autonomy:
> We need to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality.
> A rational agent not only gathers information but also learns as much as possible from what it perceives.
> If an agent just relies on the prior knowledge of its designer rather than on its own percepts, then the agent lacks autonomy. A system is autonomous to the extent that its behavior is determined by its own experience. A rational agent should be autonomous.
E.g., a clock (lacks autonomy): no input (percepts), runs only its own algorithm (prior knowledge), no learning, no experience, etc.
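The REFLEX-VACUUM-AGENT program above can be written as a short, runnable Python sketch. The string names for locations, statuses and actions are assumptions made here for illustration:

```python
def reflex_vacuum_agent(percept):
    """Reflex vacuum agent: percept is a (location, status) pair."""
    location, status = percept
    if status == "Dirty":
        return "Suck"      # always clean a dirty square first
    elif location == "A":
        return "Right"     # square A is clean: move to B
    else:                  # location == "B"
        return "Left"      # square B is clean: move back to A

# A short simulated run over a fixed percept sequence:
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", reflex_vacuum_agent(percept))
```

Note that this program matches the partial tabulation in Fig 2.1.6: the agent looks only at the current percept, never at the percept history.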
ENVIRONMENTS:
The performance measure, the environment, and the agent's actuators and sensors come under the heading of the task environment. We also call this PEAS (Performance, Environment, Actuators, Sensors).

PEAS Examples:

Agent Type: Medical diagnosis system
  Performance Measure: healthy patient, minimize costs, lawsuits
  Environment: patient, hospital, staff
  Actuators: display of questions, tests, diagnoses, treatments, referrals
  Sensors: keyboard entry of symptoms, findings, patient's answers

Agent Type: Satellite image analysis system
  Performance Measure: correct image categorization
  Environment: downlink from orbiting satellite
  Actuators: display of scene categorization
  Sensors: color pixel arrays

Agent Type: Part-picking robot
  Performance Measure: percentage of parts in correct bins
  Environment: conveyor belt with parts, bins
  Actuators: jointed arm and hand
  Sensors: camera, joint angle sensors

Agent Type: Refinery controller
  Performance Measure: purity, yield, safety
  Environment: refinery, operators
  Actuators: valves, pumps, heaters, displays
  Sensors: temperature, pressure, chemical sensors

Agent Type: Interactive English tutor
  Performance Measure: student's score on test
  Environment: set of students, testing agency
  Actuators: display of exercises, suggestions, corrections
  Sensors: keyboard entry

Environment Types:
1. Accessible vs. inaccessible (fully observable vs. partially observable): If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
2. Deterministic vs. stochastic: If the next state of the environment is completely determined by the current state and the actions selected by the agent, then we say the environment is deterministic.
3. Episodic vs. non-episodic:
> The agent's experience is divided into "episodes." Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes.
> Episodic environments are much simpler because the agent does not need to think ahead.
4. Static vs. dynamic:
If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static.
5. Discrete vs. continuous: If there are a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete; otherwise, it is continuous.

Figure: Examples of task environments.

STRUCTURE OF INTELLIGENT AGENTS
> The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device, which we will call the architecture.
> The architecture might be a plain computer, or it might include special-purpose hardware for certain tasks, such as processing camera images or filtering audio input. It might also include software that provides a degree of insulation between the raw computer and the agent program, so that we can program at a higher level.
> In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the effectors as they are generated.
> The relationship among agents, architectures, and programs can be summed up as follows: agent = architecture + program
Figure 2.3: Examples of agent types and their PAGE descriptions (medical diagnosis system, satellite image analysis system, part-picking robot, refinery controller, interactive English tutor; see the PEAS examples above).

Agent programs:
> Intelligent agents accept percepts from an environment and generate actions. The early versions of agent programs will have a very simple form (Figure 2.4).
> Each will use some internal data structures that will be updated as new percepts arrive.
> These data structures are operated on by the agent's decision-making procedures to generate an action choice, which is then passed to the architecture to be executed.

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action <- LOOKUP(percepts, table)
  return action

Figure 2.5: An agent based on a prespecified lookup table. It keeps track of the percept sequence and just looks up the best action.

Types of agents:
Agents can be grouped into four classes based on their degree of perceived intelligence and capability:
> Simple Reflex Agents
(oO pgent Segara EPS ection rales | Model-based reflex agents: > The Model-based agent can work in a partially observable environment, and track the situation > Amodel-based agent has two important factors: > Model: It is knowledge about "how things happen in the world," so it is called a Model-based agent. > Internal State: It is a representation of the current state based on percept history. Actuators © scone wine scarGoal-based agents: A goal-based agent has an agenda. It operates based on a goal in front of it and makes decisions based on how best to reach that goal. A goal-based agent operates as a search and planning function, meaning it targets the goal ahead andfinds the right action in order to reach it. Expansion of model-based agent. sensors Utility-based agents: > Autility-based agent is an agent that acts based not only on what the goal is, but the best way to reach thatgoal. The Utility-based agent is useful when there are multiple possible alternatives, and an agent has tochoose in order to perform the best action. The term utility can be used to describe how "happy" the agent is. © scone wih one camePrecepts should do now Actuators Problem Solving Agents: > Problem solving agent is a goal-based agent. > Problem solving agents decide what to do by finding sequence of actions that lead to desirable states! Goal Formulation: It organizes the steps required to formulate/ prepare one goal out of multiple goals available. Problem Formulation It is a process of deciding what actions and states to consider to follow goal formulation.The process of looking for a best sequence to achieve a goal is called Search. Asearch algorithm takes a problem as input and returns a solution in the form of action sequences. Once the solution is found the action it recommends can be carried out. 
This is called Execution phase.Well Defined problems and solutions: A problem can be defined formally by 4 components: > The initial state of the agent is the state where the agent starts in. In this case, the initial state can be| Te © scone wine scardescribed as In: Arad > The possible actions available to the agent, corresponding to each of the state the agent residesin. For example, ACTIONS(In: Arad) = (Go: Sibiu, Go: Timisoara, Go: Zerind}. ‘Actions are also known as operations. > Adescription of what each action does.the formal name for this is Transition model,Specified by thefunction Result(s,a) that returns the state that results from the action a in state s. We also use the term Successor to refer to any state reachable from a given state by a single action.For EX:Result(In(Arad),GO(Ze! wm a ~~ \\ sae Together the initial state,actions and transition model implicitly defines the state space of the problemstate space: set of all states reachable from the initial state by any sequence of actions > The goal test, determining whether the current state is a goal state. Here, the goal state is {In:Bucharest} > The path cost function, which determine the cost of each path, which is reflecting in theperformance measure. we define the cost function as c(s, a, s’), where s is the current state and a is the action performed by theagent to reach state s’. © scone wine scarfuntion Sir Prot SoLviNe-AGES inputs: pope stn: san action squenentllyempry problem a problen formation ste — Urnare Star stat, its is empry then problem —FoRMULAT-PRORLEM6 9 sto — RecnMeNDATION, se S$ REMAINDE, tate) Figure 3.1 A simple problem-solving agen Example — 8 puzzle problem Initial State 8 6 4 6 fs States: a state description specifies the location of each of the eight tiles in one of the ninesquares. For efficiency, it is useful to include the location of the blank. Actions: blank moves left, right, up, or down Transition Model: Given a state and action, this returns the resulting state. 
For example, if we apply Left to the start state, the resulting state has the 5 and the blank switched.
Goal test: the state matches the goal configuration shown in the figure.
Path cost: each step costs 1, so the path cost is just the length of the path.

State Space Search / Problem Space Search:
The state space representation forms the basis of most AI methods.
- Formulate a problem as a state space search by showing the legal problem states, the legal operators, and the initial and goal states.
- A state is defined by the specification of the values of all attributes of interest in the world.
- An operator changes one state into another; it has a precondition, which is the value of certain attributes prior to the application of the operator, and a set of effects, which are the attributes altered by the operator.
- The initial state is where you start.
- The goal state is the partial description of the solution.

Formal Description of the problem:
1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from which the problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal states).
4. Specify a set of rules that describe the actions (operations) available.

State-Space Problem Formulation:
Example: A problem is defined by four items:
- initial state, e.g., "at Arad"
- actions or successor function: S(x) = set of action-state pairs, e.g., S(Arad) = {<Go(Zerind), In(Zerind)>, ...}
- goal test (or set of goal states), e.g., x = "at Bucharest", Checkmate(x)
- path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be >= 0
A solution is a sequence of actions leading from the initial state to a goal state.

Example: 8-queens problem
Initial State: any arrangement of 0 to 8 queens on the board.
Operators: add a queen to any square.
Goal Test: 8 queens on the board, none attacked.
Path cost: not applicable, or zero (because only the final state counts; search cost might be of interest).

Search strategies:
Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
- Search Space: the set of possible solutions which a system may have.
- Start State: the state from which the agent begins the search.
- Goal Test: a function which observes the current state and returns whether the goal state is achieved or not.
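The 8-queens formulation just given (operators that add a queen, a goal test that checks that no queen is attacked) can be sketched in Python. Representing a state as a list of (row, column) pairs is an assumption made here for illustration:

```python
from itertools import combinations

def attacks(q1, q2):
    """True if two queens at (row, col) positions attack each other."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Operator: add a queen to any empty square of the 8x8 board."""
    occupied = set(state)
    return [(r, c) for r in range(8) for c in range(8) if (r, c) not in occupied]

def goal_test(state):
    """Goal: 8 queens on the board, none attacked."""
    return len(state) == 8 and not any(attacks(a, b)
                                       for a, b in combinations(state, 2))
```

The path cost is irrelevant here, exactly as the notes say: only whether the final state passes the goal test matters.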
For complex problems, the traditional algorithms are unable to find the solution within some practical time and space limits. Consequently, ‘many special techniques are developed; using heuristic functions. The algorithms that use heuristic 2 © scone wih one camefunctions are called heuristic algorithms. Heuristic algorithms are not really intelligent; they appear to be intelligent because they achieve better performance, Heuristic algorithms are more efficient because they take advantage of feedback from the data to directthe search path. Uninformed search ‘Also called blind, exhaustive or brute-force search, uses no information about the problem to guide thesearch and therefore may not be very efficient. Informed Search: Also called heuristic or intelligent search, uses information about the problem to guide the search, usually guesses the distance to a goal state and therefore efficient, but the search may not be always possible. Uninformed Search (Blind searches): 1. Breadth First Search: > One simple search strategy is a breadth-first search. In this strategy, the root node is expanded first, then all the nodes generated by the root node are expanded next, and thentheir successors, and so on. > In general, all the nodes at depth d in the search tree are expanded before the nodes at depth d +1 function BREADTHL-FIRST-SEARCH( problem) returns. solution, ofare rode a mode with STAT = proflem INTIAL-STATE, PATH-Cost =0 IW prot Goat -Test{ node STATE) then retare SOLUTION ad) Jrontir a FIFO queue with node as the a ‘explored an emp set loop do WEMITY?( frontier) then retura fire nde ~ POM frontier). /* chooses the sallowest node in fon ad nade, STATE to ape foreach action in problem. ACTIONS(nade STATE) do chad Cun-p-NoDt{ problem. 
node, action) WW ohld STATE is notin plored or frontier then i prolem GoaL-Test( child STATE) then return SOLUTION ld) frontier INSERT chi, rote) Figure 311 Breit scach na graph BFS illustrated: © scone wine scarStep 1: Initially frontier contains only one node corresponding to the source state A. Figure 1 Frontier: A Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated. They are placed at the back of fringe. Figure 2 Frontier: BC Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and putat the back of fringe. Figure 3 Frontier: CDE Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to theback of fringe. 2B © scone wih one cameFigure 4 Frontier: DED G Step 5: Node D is removed from fringe. Its children C and F are generated and added to the backof fringe. Figure 5 Frontier: ED G CF Step 6: Node E is removed from fringe. It has no children. Figure 6 Frontier: D GCF Step 7: D is expanded; 8 and F are put in OPEN. © scone wih one cameFigure 7 Frontier: G CF BF Step 8: G is selected for expansion. FES TOU HOBE'S/BOAI HEU, So the algorithm returns thepath A CG by following the parent pointers of the node corresponding to G. The algorithm terminates. Breadth first search is: One of the simplest search strategies Complete. If there is a solution, BFS is guaranteed to find it. If there are multiple solutions, then a minimal solution will be found ‘The algorithm is optimal (i.e., admissible) if all operators have the same cost.Otherwise, breadth first search finds a solution with the shortest path length, Time complexity: O(b*) Space complexity O(b*) Optimality Yes b - branching factor(maximum no of successors of any node), d — Depth of the shallowest goal node Advantages: Maximum length of any path (m) in search space © scone wih one came> BFS will provide a solution if any solution exists. 
> If there is more than one solution for a given problem, then BFS will provide the minimal solution, the one which requires the least number of steps.

Disadvantages:
> Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node.
> The breadth first search algorithm cannot be effectively used unless the search space is quite small.

Applications of the Breadth-First Search Algorithm
GPS navigation systems: Breadth-First Search is one of the best algorithms used to find neighboring locations by using the GPS system.
Broadcasting: Networking makes use of what we call packets for communication. These packets follow a traversal method to reach various networking nodes. One of the most commonly used traversal methods is Breadth-First Search. It is used as an algorithm to communicate broadcast packets across all the nodes in a network.

2. Depth First Search:
We may sometimes search for the goal along the largest depth of the tree, and move up only when further traversal along the depth is not possible. We then attempt to find alternative offspring of the parent of the node (state) last visited. If we visit the nodes of a tree using the above principles to search for the goal, the traversal made is called depth first traversal, and consequently the search strategy is called depth first search.

function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
  return RECURSIVE-DLS(MAKE-NODE(problem.INITIAL-STATE), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
  if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
  else if limit = 0 then return cutoff
  else
    cutoff-occurred? <- false
    for each action in problem.ACTIONS(node.STATE) do
      child <- CHILD-NODE(problem, node, action)
      result <- RECURSIVE-DLS(child, problem, limit - 1)
      if result = cutoff then cutoff-occurred?
<- true
      else if result != failure then return result
    if cutoff-occurred? then return cutoff else return failure

DFS illustrated:
A state space graph (figure).
Step 1: Initially the fringe contains only the node for A.
Figure 1
FRINGE: A
Step 2: A is removed from the fringe. A is expanded and its children B and C are put in front of the fringe.
Figure 2
FRINGE: B C
Step 3: Node B is removed from the fringe, and its children D and E are pushed in front of the fringe.
Figure 3
FRINGE: D E C
Step 4: Node D is removed from the fringe. C and F are pushed in front of the fringe.
Figure 4
FRINGE: C F E C
Step 5: Node C is removed from the fringe. Its child G is pushed in front of the fringe.
Figure 5
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.
Figure 6
FRINGE: G F E C
The solution path A-B-D-C-G is returned and the algorithm terminates.

Depth first search:
1. takes exponential time.
2. If N is the maximum depth of a node in the search space, in the worst case the algorithm will take time O(b^N).
3. The space taken is linear in the depth of the search tree, O(bN).
Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the search tree has infinite depth, the algorithm may not terminate. This can happen if the search space is infinite. It can also happen if the search space contains cycles. The latter case can be handled by checking for cycles in the algorithm. Thus Depth First Search is not complete.

Iterative Deepening DFS
The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found. It performs depth-first search up to a certain "depth limit", and keeps increasing the depth limit after each iteration until the goal node is found.
Advantages:
> It combines the benefits of the BFS and DFS search algorithms in terms of fast search and memory efficiency.
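The uninformed strategies above can be sketched in Python. Here `graph` is a plain adjacency-list dictionary, an illustrative stand-in for the `problem` object of the pseudocode, and the function names are my own, not from the notes:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns a shallowest path from start to goal, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])                 # FIFO queue
    parent = {start: None}                    # doubles as the explored set
    while frontier:
        node = frontier.popleft()             # shallowest node first
        for child in graph.get(node, []):
            if child not in parent:
                parent[child] = node
                if child == goal:             # goal test on generation, as in BFS
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

CUTOFF = object()                             # sentinel for "depth limit reached"

def depth_limited(graph, node, goal, limit, path=()):
    """Recursive depth-limited DFS: returns a path, CUTOFF, or None (failure)."""
    path = path + (node,)
    if node == goal:
        return list(path)
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited(graph, child, goal, limit - 1, path)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result
    return CUTOFF if cutoff_occurred else None

def ids(graph, start, goal, max_depth=20):
    """Iterative deepening: depth-limited search with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result is not CUTOFF:
            return result                     # a path, or None if truly unreachable
    return None
```

On the example graph of the BFS trace (A to B,C; B to D,E; C to D,G; D to C,F), `bfs` returns the path A, C, G, matching the trace above; the depth limit also keeps `ids` from looping forever on the C-D cycle.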
Disadvantages:
> The main drawback of IDDFS is that it repeats all the work of the previous phase.

Iterative deepening search L=0 (figure)
Iterative deepening search L=1 (figure)
Iterative deepening search L=2 (figure)
Iterative deepening search L=3 (figure)
M is the goal node, so we stop there.

Complete: Yes
Time: O(b^d)
Space: O(bd)
Optimal: Yes, if step cost = 1 or an increasing function of depth.

Conclusion:
We can conclude that IDS is a hybrid search strategy between BFS and DFS, inheriting their advantages.
IDS is faster than BFS and DFS.
It is said that "IDS is the preferred uninformed search method when there is a large search space and the depth of the solution is not known."

A comparison between DFS, BFS and IDDFS: when to use which?
DFS: when you don't care whether the answer is closest to the starting vertex/root; when the graph/tree is not very big or infinite; when space is not an issue.
BFS: when you do care about, and want, the answer closest to the root.
IDDFS: when you want a BFS, you don't have enough memory, and somewhat slower performance is accepted. In short, you want BFS + DFS.

Informed search / Heuristic search
A heuristic is a method that
+ might not always find the best solution but is guaranteed to find a good solution in reasonable time. By sacrificing completeness it increases efficiency.
+ is useful in solving tough problems which could not be solved any other way, or whose solutions take an infinite time or a very long time to compute.

Calculating the heuristic value:
1. Euclidean distance: used to calculate straight-line distance.
2. Manhattan distance: if we want to calculate vertical or horizontal distance. For example, the 8-puzzle problem:
Source state (figure)
Destination state (figure)
Then the Manhattan distance would be the sum of the number of moves required to move each number from the source state to the destination state (a table lists, for each number 1-8 in the puzzle, the number of moves needed to reach its destination).
3. No.
of misplaced tiles, for the 8-puzzle problem:
Source state (figure)
Destination state (figure)
Here, just calculate the number of tiles that have to be changed to reach the goal state. Here 1, 5, 8 need not be changed; 2, 3, 4, 6, 7 should be changed, so the heuristic value will be 5 (because 5 tiles have to be changed).

Hill Climbing Algorithm
The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, or the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.
It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that. Hill climbing is mostly used when a good heuristic is available. In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.
The idea behind hill climbing is as follows:
1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat 2 through 4 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.

Different regions in the state space landscape:
Local maximum: a state which is better than its neighbor states, but there is also another state which is higher than it.
Global maximum: the best possible state of the state space landscape. It has the highest value of the objective function.
Current state: a state in a landscape diagram where an agent is currently present.
Flat local maximum: a flat space in the landscape where all the neighbor states of the current state have the same value.
Shoulder: a plateau region which has an uphill edge.

Algorithm for Hill Climbing (figure)

Problems in the Hill Climbing Algorithm:
Plateau: hill-climbing reaches a flat region,
thus requiring a random walk.
Ridge: where there are steep slopes and the search direction is not towards the top but towards the side.

Simulated annealing search
A hill-climbing algorithm that never makes "downhill" moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk (moving to a successor chosen uniformly at random from the set of successors) is complete, but extremely inefficient. Simulated annealing is an algorithm that combines hill-climbing with a random walk in a way that yields both efficiency and completeness.
The simulated annealing algorithm is quite similar to hill climbing. Instead of picking the best move, however, it picks a random move. If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1. The probability decreases exponentially with the "badness" of the move, the amount dE by which the evaluation is worsened. The probability also decreases as the "temperature" T goes down: "bad" moves are more likely to be allowed at the start when the temperature is high, and they become more unlikely as T decreases. One can prove that if the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1.
Simulated annealing was first used extensively to solve VLSI layout problems. It has been applied widely to factory scheduling and other large-scale optimization tasks.

Best First Search:
+ A combination of depth first and breadth first searches.
+ Depth first is good because a solution can be found without computing all nodes, and breadth first is good because it does not get trapped in dead ends.
+ The best first search allows us to switch between paths, thus gaining the benefit of both approaches. At each step the most promising node is chosen.
If one of the nodes chosen generates nodes that are less promising, it is possible to choose another at the same level, and in effect the search changes from depth to breadth. If on analysis these are no better, the previously unexpanded node and branch are not forgotten, and the search method reverts to them.
OPEN is a priority queue of nodes that have been evaluated by the heuristic function but which have not yet been expanded into successors. The most promising nodes are at the front.
CLOSED are nodes that have already been generated; these nodes must be stored because a graph is being used in preference to a tree.

Algorithm:
1. Start with OPEN holding the initial state.
2. Until a goal is found or there are no nodes left on OPEN do:
+ Pick the best node on OPEN.
+ Generate its successors.
+ For each successor do:
  + If it has not been generated before, evaluate it, add it to OPEN and record its parent.
  + If it has been generated before, change the parent if this new path is better, and in that case update the cost of getting to any successor nodes.
3. If a goal is found or no more nodes are left in OPEN, quit; else return to 2.

Example:
(a) The initial state  (b) After expanding Arad  (c) After expanding Sibiu
(greedy best-first search progress on the Romania map, shown as figures)

Greedy best-first search is not optimal. It is incomplete, because it can start down an infinite path and never return to try other possibilities. The worst-case time complexity for greedy search is O(b^m), where m is the maximum depth of the search space. Because greedy search retains all nodes in memory, its space complexity is the same as its time complexity.

A* Algorithm
The Best First algorithm is a simplified form of the A* algorithm. The A* search algorithm (pronounced "Ay-star") is a tree search algorithm that finds a path from a given initial node to a given goal node (or one passing a given goal test). It employs a "heuristic estimate" which ranks each node by an estimate of the best route that goes through that node.
It visits the nodes in order of this heuristic estimate. It is similar to greedy best-first search but more accurate, because A* takes into account the nodes that have already been traversed. From A* we note that f = g + h, where
g is a measure of the distance/cost to go from the initial node to the current node, and
h is an estimate of the distance/cost to the solution from the current node.
Thus f is an estimate of how long it takes to go from the initial node to the solution.

Algorithm:
1. OPEN = {S}; CLOSED = {}; g(S) = 0, f(S) = h(S).
2. If OPEN = {}, terminate and fail.
3. Select the minimum-cost state, n, from OPEN; save n in CLOSED.
4. Terminate: if n is in G, terminate with success and return f(n).
5. Expand: for each successor, m, of n:
a) if m is not in [OPEN U CLOSED]: set g(m) = g(n) + c(n, m), set f(m) = g(m) + h(m), and insert m in OPEN.
b) if m is in [OPEN U CLOSED]: set g(m) = min{g(m), g(n) + c(n, m)} and f(m) = g(m) + h(m); if f(m) has decreased and m is in CLOSED, move m to OPEN.
6. Return to 2.

Description:
A* begins at a selected node. Applied to this node is the "cost" of entering this node (usually zero for the initial node). A* then estimates the distance to the goal node from the current node. This estimate and the cost added together are the heuristic which is assigned to the path leading to this node. The node is then added to a priority queue, often called "open".
The algorithm then removes the next node from the priority queue (because of the way a priority queue works, the node removed will have the lowest heuristic). If the queue is empty, there is no path from the initial node to the goal node and the algorithm stops. If the node is the goal node, A* constructs and outputs the successful path and stops.
If the node is not the goal node, new nodes are created for all admissible adjoining nodes; the exact way of doing this depends on the problem at hand. For each successive node, A* calculates the "cost" of entering the node and saves it with the node.
This cost is calculated from the cumulative sum of costs stored with its ancestors, plus the cost of the operation which reached this new node.
The algorithm also maintains a 'closed' list of nodes whose adjoining nodes have been checked. If a newly generated node is already in this list with an equal or lower cost, no further processing is done on that node or on the path associated with it. If a node in the closed list matches the new one, but has been stored with a higher cost, it is removed from the closed list, and processing continues on the new node.
Next, an estimate of the new node's distance to the goal is added to the cost to form the heuristic for that node. This is then added to the 'open' priority queue, unless an identical node is found there.
Once the above three steps have been repeated for each new adjoining node, the original node taken from the priority queue is added to the 'closed' list. The next node is then popped from the priority queue and the process is repeated.

The heuristic costs (straight-line distances) from each city to Bucharest: Arad, Mehadia, Bucharest, Craiova, Drobeta, Eforie, Rimnicu Vilcea, Fagaras, Sibiu, Giurgiu, Timisoara, Hirsova, Urziceni, Iasi, Vaslui, Lugoj, Zerind (values shown in the accompanying table).

(a) The initial state  (b) After expanding Arad  (c) After expanding Sibiu  ...  After expanding Pitesti
(A* search progress on the Romania map, shown as figures)

A* search properties:
The algorithm A* is admissible. This means that, provided a solution exists, the first solution found by A* is an optimal solution. A* is admissible under the following condition:
Heuristic function: for every node n, h(n) <= h*(n), where h*(n) is the true cost of reaching the goal from n.
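The A* procedure described above can be sketched in Python. The graph, edge costs and heuristic values below are small illustrative placeholders of my own, not the Romania data from the notes:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search. graph[n] is a list of (neighbor, step_cost) pairs;
    h[n] is the heuristic estimate of the cost from n to the goal."""
    open_heap = [(h[start], start, [start])]   # entries are (f, node, path)
    g = {start: 0}                             # best-known cost from start
    closed = set()
    while open_heap:
        f, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g[node]               # with admissible h, this is optimal
        if node in closed:
            continue                           # stale queue entry, skip it
        closed.add(node)
        for m, cost in graph.get(node, []):
            new_g = g[node] + cost
            if new_g < g.get(m, float('inf')): # found a cheaper path to m
                g[m] = new_g
                heapq.heappush(open_heap, (new_g + h[m], m, path + [m]))
    return None, float('inf')
```

For example, with graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)]} and h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}, the search returns the path S, A, B, G with cost 4, cheaper than the direct routes through A or B alone.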
Minimax search (game-tree examples, shown as figures: alternating MAX and MIN levels with backed-up values).

Properties of minimax:
Complete: Yes (if the tree is finite)
Optimal: Yes (against an optimal opponent)
Time complexity: O(b^m)
Space complexity: O(bm) (depth-first exploration)
For chess, b = 35, m = 100 for "reasonable" games, so an exact solution is completely infeasible.
It is not always feasible to traverse the entire tree, due to time limitations.

Alpha-Beta pruning algorithm:
Pruning: eliminating a branch of the search tree from consideration without exhaustive examination of each node.
Alpha-Beta pruning: the basic idea is to prune portions of the search tree that cannot improve the utility value of the max or min node, by just considering the values of nodes seen so far. Alpha-beta pruning is used on top of minimax search to detect paths that do not need to be explored. The intuition is:
The MAX player is always trying to maximize the score. Call this alpha.
The MIN player is always trying to minimize the score. Call this beta.
Alpha cutoff: given a MAX node n, cut off the search below n (i.e., don't generate or examine any more of n's children) if alpha(n) >= beta(n) (alpha increases and passes beta from below).
Beta cutoff: given a MIN node n, cut off the search below n (i.e., don't generate or examine any more of n's children) if beta(n) <= alpha(n) (beta decreases and passes alpha from above).
Carry the alpha and beta values down during the search. Pruning occurs whenever alpha >= beta.

Algorithm: (figure)

Example:
1) Setup phase: assign to each left-most (or right-most) internal node of the tree the variables alpha = -infinity, beta = +infinity.
2) Look at the first computed final configuration value. It's a 3. The parent is a min node, so set the beta (min) value to 3.
3) Look at the next value, 5. Since the parent is a min node, we want the minimum of 3 and 5, which is 3. The parent min node is done, so fill in the alpha (max) value of its parent max node. Always set alpha for max nodes and beta for min nodes. Copy the state of the max parent node into the second unevaluated min child.
5) Now, the min parent node has a max value of 3 and a min value of 2. The value of the 2nd child does not matter. If it is > 2, 2 will be selected for the min node. If it is < 2, it will be selected for the min node, but since it is < 3 it will not get selected for the parent max node. Thus, we prune the right subtree of the min node. Propagate the max value up the tree.
6) The max node is now done and we can set the beta value of its parent and propagate the node state to the sibling subtree's left-most path.
7) The next node is 10. 10 is not smaller than 3, so the state of the parent does not change. We still have to look at the 2nd child since alpha is still -infinity.
8) The next node is 4. The smallest value goes to the parent min node. The min subtree is done, so the parent max node gets the alpha (max) value from the child. Note that if the max node had a 2nd subtree, we could prune it since alpha > beta.
9) Continue propagating values up the tree, modifying the corresponding alpha/beta values. Also propagate the state of the root node down the left-most path of the right subtree.
10) The next value is a 2. We set the beta (min) value of the min parent to 2. Since no other children exist, we propagate the value up the tree.
11) We now have a value for the 3rd-level max node, so we can modify the beta (min) value of the min parent to 2. Now we have a situation where alpha > beta, and thus the value of the rightmost subtree of the min node does not matter, so we prune the whole subtree.
12) Finally, no more nodes remain; we propagate the values up the tree. The root has a value of 3 that comes from the left-most child. Thus, the player should choose the left-most child's move in order to maximize his/her winnings. As you can see, the result is the same as with the minimax example, but we did not visit all nodes of the tree.

AO* Search: (And-Or) Graph
The depth first search and breadth first search given earlier for OR trees or graphs can be easily adapted to AND-OR graphs. The main difference lies in the way termination conditions are determined, since all goals following an AND node must be realized, whereas a single goal node following an OR node will do. So for this purpose we use the AO* algorithm. Like the A* algorithm, here we will use two arrays and one heuristic function.
OPEN: contains the nodes that have been traversed but have not yet been marked solvable or unsolvable.
CLOSE: contains the nodes that have already been processed.
h: the distance from the current node to the goal node.

Algorithm:
Step 1: Place the starting node into OPEN.
Step 2: Compute the most promising solution tree, say T0.
Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from OPEN and place it in CLOSE.
Step 4: If n is a terminal goal node, then label n as solved and label all the ancestors of n as solved. If the starting node is marked as solved, then success and exit.
Step 5: If n is not a solvable node, then mark n as unsolvable. If the starting node is marked as unsolvable, then return failure and exit.
Step 6: Expand n. Find all its successors and find their h(n) values; push them into OPEN.
Step 7: Return to Step 2.
Step 8: Exit.

Implementation:
Let us take the following example to implement the AO* algorithm.
(AND-OR graph figure, with nodes labelled solvable/unsolvable)
Step 1:
In the above graph, the solvable nodes are A, B, C, D, E, F and the unsolvable nodes are G, H.
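The solvable/unsolvable labelling used in this kind of example can be sketched in Python. The graph encoding below (each internal node marked 'AND' or 'OR' over its children) and the particular structure are illustrative assumptions of mine, not the exact graph of the figure:

```python
def solvable(node, graph, leaves):
    """Decide whether a node of an AND-OR graph is solvable.
    graph maps internal nodes to ('AND', children) or ('OR', children);
    leaves maps terminal nodes to True (solvable) or False (unsolvable)."""
    if node in leaves:
        return leaves[node]
    kind, children = graph[node]
    results = [solvable(c, graph, leaves) for c in children]
    # An AND node needs every child solved; an OR node needs at least one.
    return all(results) if kind == 'AND' else any(results)

# Hypothetical AND-OR graph reusing the example's node names A..H:
graph = {'A': ('OR', ['B', 'C']),
         'B': ('AND', ['D', 'G']),
         'C': ('OR', ['E', 'H']),
         'D': ('OR', ['F'])}
leaves = {'E': True, 'F': True, 'G': False, 'H': False}
```

With this encoding, B is unsolvable (its AND requires the unsolvable G), but A is still solvable through its OR child C.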
Take A as the starting node. So place A into OPEN.
Step 2:
The children of A are B and C, which are solvable. So place them into OPEN and place A into CLOSE.
Step 3:
Now process the nodes B and C. The children of B and C are to be placed into OPEN. Also remove B and C from OPEN and place them into CLOSE. (The "0" label indicates that the nodes G and H are unsolvable.)
Step 4:
As the nodes G and H are unsolvable, place them into CLOSE directly and process the nodes D and E.
Step 5:
Now we have reached our goal state. So place F into CLOSE.
Step 6:
Success and exit.

Advantages:
It is an optimal algorithm.
If it traverses according to the ordering of nodes, it can be used for both OR and AND graphs.
Disadvantages:
Sometimes, for unsolvable nodes, it can't find the optimal path.
Its complexity is higher than that of other algorithms.

BASIC KNOWLEDGE REPRESENTATION AND REASONING
+ Humans are best at understanding, reasoning, and interpreting knowledge. A human knows things, which is knowledge, and as per their knowledge they perform various actions in the real world.
+ But how machines do all these things comes under knowledge representation.
There are three factors which are put into the machine, which make it valuable:
Knowledge: the information related to the environment is stored in the machine.
Reasoning: the ability of the machine to understand the stored knowledge.
Intelligence: the ability of the machine to make decisions on the basis of the stored information.

A knowledge representation language is defined by two aspects:
The syntax of a language describes the possible configurations that can constitute sentences.
The semantics determines the facts in the world to which the sentences refer.
For example, the syntax of the language of arithmetic expressions says that if x and y are expressions denoting numbers, then x > y is a sentence about numbers.
The semantics of the language says that x > y is false when y is a bigger number than x, and true otherwise.
From the syntax and semantics, we can derive an inference mechanism for an agent that uses the language.
+ Recall that the semantics of the language determine the fact to which a given sentence refers. Facts are part of the world, whereas their representations must be encoded in some way that can be physically stored within an agent. We cannot put the world inside a computer (nor can we put it inside a human), so all reasoning mechanisms must operate on representations of facts, rather than on the facts themselves. Because sentences are physical configurations of parts of the agent, reasoning must be a process of constructing new physical configurations from old ones. Proper reasoning should ensure that the new configurations represent facts that actually follow from the facts that the old configurations represent.
We want to generate new sentences that are necessarily true, given that the old sentences are true. This relation between sentences is called entailment. In mathematical notation, the relation of entailment between a knowledge base KB and a sentence a is pronounced "KB entails a" and written as KB |= a.
An inference procedure can do one of two things: given a knowledge base KB, it can generate new sentences a that purport to be entailed by KB; or, given a knowledge base KB and a sentence a, it can report whether or not a is entailed by KB.
E.g., x + y = 4 entails 4 = x + y.
Entailment is a relationship between sentences (i.e., syntax) that is based on semantics.

PROPOSITIONAL LOGIC:
Propositional logic (PL) is the simplest form of logic, where all the statements are made by propositions. A proposition is a declarative statement which is either true or false.
It is a technique of knowledge representation in logical and mathematical form.

Sentence -> AtomicSentence | ComplexSentence
AtomicSentence -> True | False | P | Q | R | ...
ComplexSentence -> ( Sentence )
                 | Sentence Connective Sentence
                 | ¬ Sentence
Connective -> ∧ | ∨ | ⇒ | ⇔

Figure 6.8 A BNF (Backus-Naur Form) grammar of sentences in propositional logic.

Syntax of propositional logic:
+ The symbols of propositional logic are the logical constants True and False, proposition symbols such as P and Q, the logical connectives ∧, ∨, ⇔, ⇒ and ¬, and parentheses.
+ All sentences are made by putting these symbols together using the following rules:
The logical constants True and False are sentences by themselves.
A propositional symbol such as P or Q is a sentence by itself.
Wrapping parentheses around a sentence yields a sentence, for example, (P ∧ Q).
A sentence can be formed by combining simpler sentences with one of the five logical connectives:
1. Negation: A sentence such as ¬P is called the negation of P. A literal can be either a positive literal or a negative literal. Example: if P = "Today is Sunday", then "Today is not Sunday" is ¬P.
2. Conjunction: A sentence which has a ∧ connective, such as P ∧ Q, is called a conjunction. Example: "Rohan is intelligent and hardworking" can be written with P = Rohan is intelligent, Q = Rohan is hardworking, as P ∧ Q.
3. Disjunction: A sentence which has a ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are the propositions. Example: "Ritika is a doctor or an engineer". Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
4. Implication: A sentence such as P ⇒ Q is called an implication. Implications are also known as if-then rules. Example: "If it is raining, then the street is wet." Let P = It is raining and Q = The street is wet; then it is represented as P ⇒ Q.
5. Biconditional: A sentence such as P ⇔ Q is a biconditional sentence. Example: "If I am breathing, then I am alive." P = I am breathing, Q = I am alive; it can be represented as P ⇔ Q.

Precedence of connectives:
First precedence: parentheses
Second precedence: ¬ (negation)
Third precedence: ∧ (conjunction)
Fourth precedence: ∨ (disjunction)
Fifth precedence: ⇒ (implication)
Sixth precedence: ⇔ (biconditional)
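The five connectives can be demonstrated with a small truth-table evaluator in Python (an illustrative sketch; the function names are my own, not from the notes):

```python
from itertools import product

# The five connectives as functions over Python booleans.
def NOT(p):        return not p
def AND(p, q):     return p and q
def OR(p, q):      return p or q
def IMPLIES(p, q): return (not p) or q   # P => Q is false only when P is true and Q is false
def IFF(p, q):     return p == q         # biconditional: true when both sides agree

def truth_table(sentence, n_vars):
    """Evaluate a sentence (a function of n_vars booleans) on every assignment."""
    return {vals: sentence(*vals) for vals in product([True, False], repeat=n_vars)}

# Example: P => Q is equivalent to (NOT P) OR Q on every assignment.
lhs = truth_table(lambda p, q: IMPLIES(p, q), 2)
rhs = truth_table(lambda p, q: OR(NOT(p), q), 2)
assert lhs == rhs
```

The same `truth_table` helper can check any equivalence, e.g., that P ⇔ Q agrees with (P ⇒ Q) ∧ (Q ⇒ P) on all four assignments of P and Q.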
