UNIT I: Concept Learning
Features from Computer View
Representing Hypotheses
Many possible representations for hypotheses h
Idea: h as conjunctions of constraints on features
Each constraint can be:
– a specific value (e.g., Nose = Square)
– don't care (e.g., Eyes = ?)
– no value allowed (e.g., Hair? = Ø)
For example, over the attributes Eyes, Nose, Head, Fcolor, and Hair?:
<Round, ?, Round, ?, No>
Prototypical Concept Learning Task
Given:
– Instances X: Faces, each described by the attributes
Eyes, Nose, Head, Fcolor, and Hair?
– Target function c: Smile? : X -> { no, yes }
– Hypotheses H: Conjunctions of literals such as
<?,Square,Square,Yellow,?>
– Training examples D: Positive and negative examples
of the target function
<x1, c(x1)>, <x2, c(x2)>, ..., <xm, c(xm)>
Determine: a hypothesis h in H such that h(x)=c(x)
for all x in D.
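The constraint semantics above can be sketched in Python (an illustration, not from the slides; '?' encodes don't care and None encodes Ø):

```python
# Check whether a conjunctive hypothesis h is satisfied by an instance x.
# Convention (an assumption for this sketch): '?' = don't care,
# None = Ø (no value allowed), any other value must match exactly.
def matches(h, x):
    return all(hc == '?' or (hc is not None and hc == xc)
               for hc, xc in zip(h, x))
```

For example, the hypothesis <Round, ?, Round, ?, No> is satisfied by any face with round eyes, a round head, and no hair, whatever its nose and face color.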
Inductive Learning Hypothesis
Any hypothesis found to approximate the target
function well over a sufficiently large set of
training examples will also approximate the target
function well over other unobserved examples.
Instances, Hypotheses, and More-General-Than
[Figure: instances x1, x2 in X and hypotheses h1, h2, h3 in H, arranged from specific to general; h3 is more general than both h1 and h2]
x1 = <Round, Square, Square, Purple, Yes>
x2 = <Round, Square, Round, Green, Yes>
h1 = <Round, ?, Square, ?, ?>
h2 = <Round, ?, ?, ?, Yes>
h3 = <Round, ?, ?, ?, ?>
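The more-general-than-or-equal-to ordering pictured here can be tested mechanically; a minimal sketch (illustrative names, assuming the same '?' convention as above):

```python
def more_general_or_equal(h1, h2):
    """True if hypothesis h1 is more general than or equal to h2:
    every constraint in h1 is at least as permissive as the
    corresponding constraint in h2."""
    def covers(c1, c2):
        # '?' covers any constraint; otherwise the constraints must agree.
        return c1 == '?' or c1 == c2
    return all(covers(c1, c2) for c1, c2 in zip(h1, h2))
```

On the figure's hypotheses, h3 is more general than h1 (its '?' in the Head position covers h1's Square), but not vice versa.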
Find-S Algorithm
1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x
For each attribute constraint ai in h
IF the constraint ai in h is satisfied by x THEN
do nothing
ELSE
replace ai in h by next more general constraint satisfied by x
3. Output hypothesis h
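The three steps above translate directly into Python; a sketch assuming tuple-encoded faces and 'yes'/'no' labels, with None standing in for the most specific Ø constraint:

```python
def find_s(examples, n_attrs=5):
    """Find-S: start from the most specific hypothesis (all Ø, here
    None) and minimally generalize on each positive example;
    negative examples are simply ignored."""
    h = [None] * n_attrs                  # most specific hypothesis in H
    for x, label in examples:
        if label != 'yes':                # Find-S uses only positives
            continue
        for i, (hc, xc) in enumerate(zip(h, x)):
            if hc is None:                # first positive: adopt its value
                h[i] = xc
            elif hc != '?' and hc != xc:  # next more general constraint
                h[i] = '?'
    return tuple(h)
```

Run on the five training examples of the following trace, this yields <?, ?, Round, ?, Yes>.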
Hypothesis Space Search by Find-S
[Figure: Find-S search path — instances x1–x5 in X drive hypothesis updates h0 → h1,2 → h3,4 → h5 in H, moving from specific to general]
h0 = <Ø, Ø, Ø, Ø, Ø>
x1 = <Round, Triangle, Round, Purple, Yes> +   h1 = <Round, Triangle, Round, Purple, Yes>
x2 = <Square, Square, Square, Green, Yes>  -   h2 = <Round, Triangle, Round, Purple, Yes>
x3 = <Square, Triangle, Round, Yellow, Yes> +  h3 = <?, Triangle, Round, ?, Yes>
x4 = <Round, Triangle, Round, Green, No>   -   h4 = <?, Triangle, Round, ?, Yes>
x5 = <Square, Square, Round, Yellow, Yes>  +   h5 = <?, ?, Round, ?, Yes>
Complaints about Find-S
• Cannot tell whether it has learned concept
• Cannot tell when training data inconsistent
• Picks a maximally specific h (why?)
• Depending on H, there might be several maximally specific hypotheses!
The List-Then-Eliminate Algorithm
1. Set VersionSpace equal to a list containing every
hypothesis in H
2. For each training example, <x,c(x)>
remove from VersionSpace any hypothesis h for which
h(x) != c(x)
3. Output the list of hypotheses in VersionSpace
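For a tiny attribute space, List-Then-Eliminate can be run literally; a sketch (an assumption to keep H small: only the conjunctive hypotheses without Ø constraints are enumerated):

```python
from itertools import product

def satisfies(h, x):
    # '?' is don't care; any other constraint must match exactly.
    return all(hc == '?' or hc == xc for hc, xc in zip(h, x))

def list_then_eliminate(examples, domains):
    """Enumerate every conjunctive hypothesis over the given attribute
    domains, then keep only those consistent with every training
    example (feasible only when H is tiny)."""
    H = product(*[list(d) + ['?'] for d in domains])
    return [h for h in H
            if all(satisfies(h, x) == (c == 'yes') for x, c in examples)]
```

With two attributes whose domains each have two values, H has only nine hypotheses, so the full enumeration is easy to check by hand.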
Example Version Space
G: { <?,?,Round,?,?> <?,Triangle,?,?,?> }
S: { <?,Triangle,Round,?,Yes> }
Representing Version Spaces
The General boundary, G, of version space VSH,D is
the set of its maximally general members; the Specific
boundary, S, is the set of its maximally specific members.
Candidate Elimination Algorithm (cont)
For each training example d, do (cont)
If d is a negative example
Remove from S any hypothesis that does include d
For each hypothesis g in G that does include d
Remove g from G
Add to G all minimal specializations h of g such that
1. h does not include d, and
2. Some member of S is more specific than h
Remove from G any hypothesis that is less general
than another hypothesis in G
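The negative-example step replaces each too-general g with its minimal specializations, i.e., the results of tightening one '?' constraint just enough to exclude d; a sketch for conjunctive hypotheses (illustrative helper, not from the slides):

```python
def min_specializations(g, domains, d):
    """All minimal specializations of hypothesis g that exclude the
    negative instance d: replace one '?' at a time with any value
    from that attribute's domain other than d's value there."""
    specs = []
    for i, gc in enumerate(g):
        if gc == '?':
            for v in domains[i]:
                if v != d[i]:
                    specs.append(g[:i] + (v,) + g[i + 1:])
    return specs
```

Each candidate is then kept only if some member of S is more specific than it, as required by condition 2 above.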
Example Trace
S0: { <Ø, Ø, Ø, Ø, Ø> }
G0: { <?, ?, ?, ?, ?> }
Negative example x2 = <Square, Square, Square, Green, Yes> leaves S0 unchanged and specializes G0 to G1.
What Training Example Next?
G: { <?,?,Round,?,?> <?,Triangle,?,?,?> }
S: { <?,Triangle,Round,?,Yes> }
How Should These Be Classified?
G: { <?,?,Round,?,?> <?,Triangle,?,?,?> }
S: { <?,Triangle,Round,?,Yes> }
What Justifies this Inductive Leap?
+ < Round, Triangle, Round, Purple, Yes >
+ < Square, Triangle, Round, Yellow, Yes >
An UN-Biased Learner
Idea: Choose H that expresses every teachable
concept (i.e., H is the power set of X)
Consider H’ = disjunctions, conjunctions, negations
over previous H.
For example:
<?, Triangle, Round, ?, Yes> ∨ <Square, Square, ?, Purple, ?>
Inductive Bias
Consider
– concept learning algorithm L
– instances X, target concept c
– training examples Dc={<x,c(x)>}
– let L(xi,Dc) denote the classification assigned to the
instance xi by L after training on data Dc.
Definition:
The inductive bias of L is any minimal set of assertions B
such that for any target concept c and corresponding
training examples Dc
(∀xi ∈ X) [ (B ∧ Dc ∧ xi) ⊢ L(xi, Dc) ]
where A ⊢ B means A logically entails B
Inductive Systems and Equivalent Deductive Systems
[Diagram: an inductive system — the Candidate Elimination algorithm, using hypothesis space H, takes the training examples and a new instance and outputs a classification of the new instance, or "don't know"]