
Machine Learning 2D5362

Lecture 2:
Concept Learning
Outline
Learning from examples
General-to-specific ordering of hypotheses
Version spaces and candidate elimination
algorithm
Inductive bias
Training Examples for Concept
Enjoy Sport
Concept: days on which my friend Aldo enjoys his favourite
water sports
Task: predict the value of EnjoySport for an arbitrary day
based on the values of the other attributes

Sky    Temp  Humid   Wind    Water  Forecast  EnjoySport
Sunny  Warm  Normal  Strong  Warm   Same      Yes
Sunny  Warm  High    Strong  Warm   Same      Yes
Rainy  Cold  High    Strong  Warm   Change    No
Sunny  Warm  High    Strong  Cool   Change    Yes
Representing Hypothesis
Hypothesis h is a conjunction of constraints on
attributes
Each constraint can be:
A specific value: e.g. Water=Warm
A "don't care" value: e.g. Water=?
No value allowed (the empty constraint): e.g. Water=∅
Example: hypothesis h
Sky Temp Humid Wind Water Forecast
< Sunny ? ? Strong ? Same >
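To make the representation concrete, here is a minimal Python sketch (not part of the original slides; the tuple encoding with '?' for "don't care" and None for the empty constraint is an assumption chosen for illustration) showing a hypothesis and a test of whether it covers an instance.

```python
# A hypothesis is a tuple of constraints, one per attribute:
#   a specific value such as 'Warm'  -> exactly that value is required
#   '?'                              -> any value is acceptable (don't care)
#   None                             -> no value is acceptable (empty constraint)

def matches(hypothesis, instance):
    """Return True if the instance satisfies every constraint of the hypothesis."""
    for constraint, value in zip(hypothesis, instance):
        if constraint is None:                 # empty constraint matches nothing
            return False
        if constraint != '?' and constraint != value:
            return False
    return True

h = ('Sunny', '?', '?', 'Strong', '?', 'Same')
x = ('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same')
print(matches(h, x))   # True: every constraint of h is satisfied by x
```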
Prototypical Concept Learning
Task
Given:
Instances X: possible days, described by the attributes
Sky, Temp, Humidity, Wind, Water, Forecast
Target function c: EnjoySport : X → {0,1}
Hypotheses H: conjunctions of literals, e.g.

< Sunny ? ? Strong ? Same >

Training examples D: positive and negative examples of
the target function: <x1,c(x1)>, …, <xn,c(xn)>
Determine:
A hypothesis h in H such that h(x)=c(x) for all x in D.
Inductive Learning Hypothesis
Any hypothesis found to approximate the
target function well over the training
examples will also approximate the target
function well over unobserved examples.
Number of Instances,
Concepts, Hypotheses
Sky: Sunny, Cloudy, Rainy
AirTemp: Warm, Cold

Humidity: Normal, High

Wind: Strong, Weak

Water: Warm, Cold

Forecast: Same, Change

#distinct instances : 3*2*2*2*2*2 = 96


#distinct concepts : 2^96
#syntactically distinct hypotheses : 5*4*4*4*4*4=5120
#semantically distinct hypotheses : 1+4*3*3*3*3*3=973
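The counts above follow directly from the attribute cardinalities; the short Python sketch below (illustrative, not from the slides) reproduces them.

```python
# Values per attribute: Sky has 3, the other five attributes have 2 each.
value_counts = [3, 2, 2, 2, 2, 2]

instances = 1
for n in value_counts:
    instances *= n
print(instances)            # 96 distinct instances

print(2 ** instances)       # 2^96 distinct concepts (subsets of the instance space)

syntactic = 1
for n in value_counts:
    syntactic *= n + 2      # each attribute: any value, '?', or the empty constraint
print(syntactic)            # 5*4*4*4*4*4 = 5120

semantic = 1
for n in value_counts:
    semantic *= n + 1       # each attribute: any value or '?'
print(1 + semantic)         # 973: all hypotheses with an empty constraint collapse into one
```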
General to Specific Order
Consider two hypotheses:
h1=< Sunny,?,?,Strong,?,?>

h2=< Sunny,?,?,?,?,?>

Set of instances covered by h1 and h2:


h2 imposes fewer constraints than h1 and therefore classifies more
instances x as positive (h(x)=1).

Definition: Let hj and hk be boolean-valued functions defined over X.

Then hj is more general than or equal to hk (written hj ≥g hk) if and
only if
∀x ∈ X : [ (hk(x) = 1) → (hj(x) = 1) ]
The ≥g relation imposes a partial order over the hypothesis space H
that is utilized by many concept learning methods.
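For conjunctive hypotheses the ≥g relation can be checked constraint by constraint. The following sketch (illustrative, assuming the tuple encoding from the earlier example and hypotheses without the empty constraint) implements that test.

```python
def more_general_or_equal(hj, hk):
    """hj >=g hk for conjunctive hypotheses without empty constraints:
    every constraint of hj is either '?' or identical to hk's constraint."""
    return all(cj == '?' or cj == ck for cj, ck in zip(hj, hk))

h1 = ('Sunny', '?', '?', 'Strong', '?', '?')
h2 = ('Sunny', '?', '?', '?', '?', '?')
print(more_general_or_equal(h2, h1))   # True:  h2 >=g h1
print(more_general_or_equal(h1, h2))   # False: h1 is strictly more specific
```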
Instances, Hypotheses, and the More-General Relation
[Figure: instances in X and hypotheses in H, ordered from specific to general]

x1 = <Sunny,Warm,High,Strong,Cool,Same>    h1 = <Sunny,?,?,Strong,?,?>
x2 = <Sunny,Warm,High,Light,Warm,Same>     h2 = <Sunny,?,?,?,?,?>
                                           h3 = <Sunny,?,?,?,Cool,?>
h2 is more general than both h1 and h3.
Find-S Algorithm
1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x
For each attribute constraint ai in h
If the constraint ai in h is satisfied by x
then do nothing
else replace ai in h by the next more
general constraint that is satisfied by x
3. Output hypothesis h
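A direct Python rendering of these three steps for conjunctive hypotheses might look as follows (a sketch using the tuple encoding assumed earlier, with None as the empty constraint).

```python
def find_s(examples):
    """examples: list of (instance, label) pairs, label True for positive.
    Start from the most specific hypothesis and minimally generalize it
    so that it covers every positive example."""
    n = len(examples[0][0])
    h = [None] * n                       # most specific hypothesis: all empty constraints
    for x, positive in examples:
        if not positive:
            continue                     # Find-S ignores negative examples
        for i, value in enumerate(x):
            if h[i] is None:
                h[i] = value             # adopt the value from the first positive example
            elif h[i] != value:
                h[i] = '?'               # conflicting values: generalize to don't care
    return tuple(h)

examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]
print(find_s(examples))   # ('Sunny', 'Warm', '?', 'Strong', '?', '?')
```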
Hypothesis Space Search by
Find-S
[Figure: Find-S search from the most specific hypothesis h0 toward more general hypotheses]

x1 = <Sunny,Warm,Normal,Strong,Warm,Same> +    h0 = <∅,∅,∅,∅,∅,∅>
x2 = <Sunny,Warm,High,Strong,Warm,Same> +      h1 = <Sunny,Warm,Normal,Strong,Warm,Same>
x3 = <Rainy,Cold,High,Strong,Warm,Change> -    h2,3 = <Sunny,Warm,?,Strong,Warm,Same>
x4 = <Sunny,Warm,High,Strong,Cool,Change> +    h4 = <Sunny,Warm,?,Strong,?,?>
Properties of Find-S
Hypothesis space described by conjunctions
of attributes
Find-S will output the most specific
hypothesis within H that is consistent with the
positive training examples.
The output hypothesis will also be consistent
with the negative examples, provided the
target concept is contained in H.
Complaints about Find-S
Can't tell if the learner has converged to the target
concept, in the sense that it is unable to determine
whether it has found the only hypothesis consistent
with the training examples.
Can't tell when the training data is inconsistent, as it
ignores negative training examples.
Why prefer the most specific hypothesis?
What if there are multiple maximally specific
hypotheses?
Version Spaces
A hypothesis h is consistent with a set of
training examples D of target concept c if and
only if h(x)=c(x) for each training example
<x,c(x)> in D.
Consistent(h,D) := ∀<x,c(x)> ∈ D : h(x)=c(x)
The version space, VS_H,D, with respect to
hypothesis space H and training set D, is the
subset of hypotheses from H consistent with
all training examples:
VS_H,D = { h ∈ H | Consistent(h,D) }
List-Then-Eliminate Algorithm
1. VersionSpace ← a list containing every
hypothesis in H
2. For each training example <x,c(x)>
remove from VersionSpace any
hypothesis h that is inconsistent with the
training example: h(x) ≠ c(x)
3. Output the list of hypotheses in
VersionSpace
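For the tiny EnjoySport hypothesis space this brute-force enumeration is actually feasible. The sketch below (illustrative Python, omitting hypotheses that contain the empty constraint) lists the version space for the four training examples.

```python
from itertools import product

def classifies_positive(h, x):
    # h classifies x as positive iff every constraint is '?' or equals x's value
    return all(c == '?' or c == v for c, v in zip(h, x))

def list_then_eliminate(examples, attribute_values):
    """Enumerate every conjunctive hypothesis (a value or '?' per attribute)
    and keep those consistent with all training examples."""
    choices = [list(values) + ['?'] for values in attribute_values]
    return [h for h in product(*choices)
            if all(classifies_positive(h, x) == label for x, label in examples)]

attribute_values = [['Sunny', 'Cloudy', 'Rainy'], ['Warm', 'Cold'],
                    ['Normal', 'High'], ['Strong', 'Weak'],
                    ['Warm', 'Cool'], ['Same', 'Change']]
examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]
for h in list_then_eliminate(examples, attribute_values):
    print(h)   # the six hypotheses of the EnjoySport version space
```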
Example Version Space
S: {<Sunny,Warm,?,Strong,?,?>}

<Sunny,?,?,Strong,?,?> <Sunny,Warm,?,?,?,?> <?,Warm,?,Strong,?,?>

G: {<Sunny,?,?,?,?,?>, <?,Warm,?,?,?,?>}

x1 = <Sunny Warm Normal Strong Warm Same> +


x2 = <Sunny Warm High Strong Warm Same> +
x3 = <Rainy Cold High Strong Warm Change> -
x4 = <Sunny Warm High Strong Cool Change> +
Representing Version Spaces
The general boundary, G, of version space VSH,D
is the set of maximally general members.
The specific boundary, S, of version space VSH,D
is the set of maximally specific members.
Every member of the version space lies between
these boundaries:
VS_H,D = { h ∈ H | (∃s ∈ S)(∃g ∈ G) : g ≥g h ≥g s }
where x ≥g y means x is more general than or equal to y.
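Because every member of VS_H,D lies between the boundaries, membership can be tested against S and G alone. The sketch below (illustrative, reusing the constraint-wise ≥g test from the earlier sketch) does exactly that.

```python
def more_general_or_equal(hj, hk):
    return all(cj == '?' or cj == ck for cj, ck in zip(hj, hk))

def in_version_space(h, S, G):
    """h is in the version space iff some g in G satisfies g >=g h
    and h >=g some s in S."""
    return (any(more_general_or_equal(g, h) for g in G) and
            any(more_general_or_equal(h, s) for s in S))

S = [('Sunny', 'Warm', '?', 'Strong', '?', '?')]
G = [('Sunny', '?', '?', '?', '?', '?'), ('?', 'Warm', '?', '?', '?', '?')]
print(in_version_space(('Sunny', '?', '?', 'Strong', '?', '?'), S, G))   # True
print(in_version_space(('Rainy', '?', '?', '?', '?', '?'), S, G))        # False
```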
Candidate Elimination
Algorithm
G ← maximally general hypotheses in H
S ← maximally specific hypotheses in H
For each training example d=<x,c(x)>
If d is a positive example
Remove from G any hypothesis that is inconsistent with d
For each hypothesis s in S that is not consistent with d
remove s from S.

Add to S all minimal generalizations h of s such that


h consistent with d
Some member of G is more general than h
Remove from S any hypothesis that is more general than
another hypothesis in S
Candidate Elimination
Algorithm
If d is a negative example
Remove from S any hypothesis that is inconsistent with d
For each hypothesis g in G that is not consistent with d
remove g from G.

Add to G all minimal specializations h of g such that

h consistent with d
Some member of S is more specific than h
Remove from G any hypothesis that is less general than
another hypothesis in G
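The "minimal generalizations" and "minimal specializations" steps have a simple form for conjunctive hypotheses. The sketch below (illustrative Python, not the full algorithm with its boundary-set bookkeeping) shows one common way to realize them.

```python
def min_generalization(s, x):
    """Smallest generalization of s that covers instance x:
    fill empty constraints with x's values, relax conflicts to '?'."""
    s_new = list(s)
    for i, value in enumerate(x):
        if s_new[i] is None:
            s_new[i] = value
        elif s_new[i] not in ('?', value):
            s_new[i] = '?'
    return tuple(s_new)

def min_specializations(g, x, attribute_values):
    """Minimal specializations of g that exclude instance x:
    replace one '?' at a time by any attribute value different from x's."""
    results = []
    for i, constraint in enumerate(g):
        if constraint == '?':
            for value in attribute_values[i]:
                if value != x[i]:
                    specialized = list(g)
                    specialized[i] = value
                    results.append(tuple(specialized))
    return results
```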
Example Candidate Elimination
S: {<∅, ∅, ∅, ∅, ∅, ∅>}

G: {<?, ?, ?, ?, ?, ?>}
x1 = <Sunny Warm Normal Strong Warm Same> +

S: {< Sunny Warm Normal Strong Warm Same >}

G: {<?, ?, ?, ?, ?, ?>}

x2 = <Sunny Warm High Strong Warm Same> +


S: {< Sunny Warm ? Strong Warm Same >}

G: {<?, ?, ?, ?, ?, ?>}
Example Candidate Elimination
S: {< Sunny Warm ? Strong Warm Same >}

G: {<?, ?, ?, ?, ?, ?>}

x3 = <Rainy Cold High Strong Warm Change> -


S: {< Sunny Warm ? Strong Warm Same >}

G: {<Sunny,?,?,?,?,?>, <?,Warm,?,?,?,?>, <?,?,?,?,?,Same>}


x4 = <Sunny Warm High Strong Cool Change> +
S: {< Sunny Warm ? Strong ? ? >}

G: {<Sunny,?,?,?,?,?>, <?,Warm,?,?,?,?>}
Example Candidate Elimination
Instance space: integer points in the x,y plane
Hypothesis space: rectangles, that means hypotheses
are of the form a ≤ x ≤ b, c ≤ y ≤ d.

Homework: Exercise 2.4


Classification of New Data
S: {<Sunny,Warm,?,Strong,?,?>}

<Sunny,?,?,Strong,?,?> <Sunny,Warm,?,?,?,?> <?,Warm,?,Strong,?,?>

G: {<Sunny,?,?,?,?,?>, <?,Warm,?,?,?,?>}

x5 = <Sunny Warm Normal Strong Cool Change> + 6/0


x6 = <Rainy Cold Normal Light Warm Same> - 0/6
x7 = <Sunny Warm Normal Light Warm Same> ? 3/3
x8 = <Sunny Cold Normal Strong Warm Same> ? 2/4
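The m/n counts above are votes of the six version-space hypotheses. A small sketch (illustrative, reusing the version space listed by the List-Then-Eliminate example) of that voting scheme:

```python
def classify_by_vote(version_space, x):
    """Let every hypothesis in the version space vote on x.
    Returns '+', '-', or '?' together with the (positive, negative) tally."""
    pos = sum(1 for h in version_space
              if all(c == '?' or c == v for c, v in zip(h, x)))
    neg = len(version_space) - pos
    if neg == 0:
        return '+', (pos, neg)
    if pos == 0:
        return '-', (pos, neg)
    return '?', (pos, neg)              # the version space is split: answer unknown

# e.g. x5 yields ('+', (6, 0)) and x7 yields ('?', (3, 3)) with the
# six-hypothesis EnjoySport version space from the earlier sketch.
```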
Inductive Leap
+ <Sunny Warm Normal Strong Cool Change>
+ <Sunny Warm Normal Light Warm Same>

S : <Sunny Warm Normal ? ? ?>

How can we justify classifying the new example
+ <Sunny Warm Normal Strong Warm Same>
as positive?

Bias: We assume that the hypothesis space H contains
the target concept c. In other words, we assume that c can be
described by a conjunction of literals.
Biased Hypothesis Space
Our hypothesis space is unable to represent a
simple disjunctive target concept :
(Sky=Sunny) v (Sky=Cloudy)
x1 = <Sunny Warm Normal Strong Cool Change> +
x2 = <Cloudy Warm Normal Strong Cool Change> +

S : { <?, Warm, Normal, Strong, Cool, Change> }

x3 = <Rainy Warm Normal Light Warm Same> -


S : {}
Unbiased Learner
Idea: Choose H that expresses every teachable
concept, that means H is the set of all possible
subsets of X, called the power set P(X).
|X|=96, |P(X)|=2^96 ≈ 10^28 distinct concepts
H = disjunctions, conjunctions, negations
e.g. <Sunny Warm Normal ? ? ?> v <? ? ? ? ? Change>
H surely contains the target concept.
Unbiased Learner
What are S and G in this case?
Assume positive examples (x1, x2, x3) and
negative examples (x4, x5)
S : { (x1 v x2 v x3) }    G : { ¬(x4 v x5) }

The only examples that are classified unambiguously are the training
examples themselves. In other words, in order to learn the target
concept one would have to present every single instance in X as a
training example.
Each unobserved instance will be classified positive by
precisely half the hypotheses in VS and negative by the
other half.
Futility of Bias-Free Learning
A learner that makes no prior assumptions
regarding the identity of the target concept
has no rational basis for classifying any
unseen instances.

No Free Lunch!
Inductive Bias
Consider:
Concept learning algorithm L

Instances X, target concept c

Training examples Dc={<x,c(x)>}

Let L(xi,Dc ) denote the classification assigned to


instance xi by L after training on Dc.
Definition:
The inductive bias of L is any minimal set of assertions
B such that for any target concept c and
corresponding training data Dc
(∀xi ∈ X) [ (B ∧ Dc ∧ xi) ⊢ L(xi, Dc) ]
where A ⊢ B means that A logically entails B.
Inductive Systems and
Equivalent Deductive Systems
[Diagram: an inductive system and its equivalent deductive system]
Inductive system: the candidate elimination algorithm takes the training
examples and a new instance, searches hypothesis space H, and outputs a
classification of the new instance or "don't know".
Equivalent deductive system: a theorem prover takes the training examples,
the new instance, and the assertion "H contains the target concept", and
outputs a classification of the new instance or "don't know".
Three Learners with Different
Biases
Rote learner: store examples, classify x if and only if
it matches a previously observed example.
No inductive bias.

Version space candidate elimination algorithm:
Bias: The hypothesis space contains the target
concept.

Find-S:
Bias: The hypothesis space contains the target
concept, and all instances are negative instances
unless the opposite is entailed by its other
knowledge.
