Machine Learning: Lecture 2.


Transcripts
Slide 1

Machine Learning: Lecture 2. Concept Learning and Version Spaces (based on Chapter 2 of Mitchell, T., Machine Learning, 1997)

Slide 2

What is a Concept? (Diagram: nested sets of Things, Animals, and Birds, alongside Cars.) A concept is a subset of objects or events defined over a larger set. [Example: the concept of a bird is the subset of all objects (i.e., the set of all things, or of all animals) that belong to the category "bird".] Alternatively, a concept is a boolean-valued function defined over this larger set. [Example: a function defined over all animals whose value is true for birds and false for every other animal.]

Slide 3

What is Concept Learning? Given a set of instances labeled as members or non-members of a concept, concept learning consists of automatically inferring the general definition of this concept. In other words, concept learning consists of approximating a boolean-valued function from training examples of its input and output.

Slide 4

Example of a Concept Learning Task
Concept: Good Days for Water Sports (values: Yes, No)
Attributes/Features:
  Sky (values: Sunny, Cloudy, Rainy)
  AirTemp (values: Warm, Cold)
  Humidity (values: Normal, High)
  Wind (values: Strong, Weak)
  Water (values: Warm, Cool)
  Forecast (values: Same, Change)
Example of a training point: <Sunny, Warm, High, Strong, Warm, Same>, class = Yes

Slide 5

Example of a Concept Learning Task
Database:
Day | Sky   | AirTemp | Humidity | Wind   | Water | Forecast | WaterSport (class)
1   | Sunny | Warm    | Normal   | Strong | Warm  | Same     | Yes
2   | Sunny | Warm    | High     | Strong | Warm  | Same     | Yes
3   | Rainy | Cold    | High     | Strong | Warm  | Change   | No
4   | Sunny | Warm    | High     | Strong | Cool  | Change   | Yes
Chosen hypothesis representation: a conjunction of constraints on each attribute, where:
  "?" means "any value is acceptable"
  "0" means "no value is acceptable"
Example of a hypothesis: <?, Cold, High, ?, ?, ?> (if the air temperature is cold and the humidity high, then it is a good day for water sports)
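As a concrete sketch (in Python; not part of the original lecture), the representation above can be encoded as a tuple of constraints, with a small function that tests whether a hypothesis covers an instance:

```python
# A hypothesis is a tuple of attribute constraints: a literal value,
# "?" (any value is acceptable), or "0" (no value is acceptable).

def matches(hypothesis, instance):
    """True iff every attribute constraint in the hypothesis is satisfied.
    A "0" constraint is never satisfied, so such a hypothesis covers nothing."""
    return all(c == "?" or c == v for c, v in zip(hypothesis, instance))

# The example hypothesis from this slide: <?, Cold, High, ?, ?, ?>
h = ("?", "Cold", "High", "?", "?", "?")
print(matches(h, ("Rainy", "Cold", "High", "Strong", "Warm", "Change")))  # True
print(matches(h, ("Sunny", "Warm", "High", "Strong", "Warm", "Same")))    # False
```

Note that the all-"?" hypothesis covers every instance, while any hypothesis containing a "0" covers none, matching the semantics given above.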

Slide 6

Example of a Concept Learning Task
Goal: to infer the "best" concept description from the set of all possible hypotheses ("best" means "the one that best generalizes to all (known or unknown) elements of the instance space"; concept learning is thus an ill-defined task).
Most general hypothesis: every day is a good day for water sports: <?, ?, ?, ?, ?, ?>
Most specific hypothesis: no day is a good day for water sports: <0, 0, 0, 0, 0, 0>

Slide 7

Terminology and Notation. The set of items over which the concept is defined is called the set of instances (denoted X). The concept to be learned is called the Target Concept (denoted c: X -> {0,1}). The set of Training Examples is a set of instances, x, along with their target concept value c(x). Members of the concept (instances for which c(x) = 1) are called positive examples. Nonmembers of the concept (instances for which c(x) = 0) are called negative examples. H denotes the set of all possible hypotheses. H is determined by the human designer's choice of a hypothesis representation. The goal of concept learning is to find a hypothesis h: X -> {0,1} such that h(x) = c(x) for all x in X.

Slide 8

Concept Learning as Search. Concept learning can be viewed as the task of searching through a large space of hypotheses implicitly defined by the hypothesis representation. Selecting a hypothesis representation is an important step, since it restricts (or biases) the space that can be searched. [For example, the hypothesis "if the air temperature is cold or the humidity high, then it is a good day for water sports" cannot be expressed in our chosen representation.]

Slide 9

General-to-Specific Ordering of Hypotheses. Definition: let hj and hk be boolean-valued functions defined over X. Then hj is more-general-than-or-equal-to hk iff for all x in X, [(hk(x) = 1) -> (hj(x) = 1)]. Example: h1 = <Sunny, ?, ?, Strong, ?, ?>, h2 = <Sunny, ?, ?, ?, ?, ?>. Every instance that is classified positive by h1 will also be classified positive by h2. Therefore h2 is more general than h1. We also use the notions of "strictly" more-general-than and more-specific-than (diagram in [Mitchell, p. 25]).
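For the conjunctive representation used here, the more-general-than-or-equal-to relation can be checked purely syntactically, constraint by constraint. A minimal sketch (mine, not from the lecture):

```python
def more_general_or_equal(hj, hk):
    """True iff hj is more-general-than-or-equal-to hk, i.e. every instance
    covered by hk is covered by hj. For conjunctive hypotheses this holds
    iff each constraint of hj is at least as permissive as that of hk."""
    def covers(cj, ck):
        # "?" covers anything; equal literals cover each other;
        # "0" (no value acceptable) is covered by any constraint.
        return cj == "?" or cj == ck or ck == "0"
    return all(covers(cj, ck) for cj, ck in zip(hj, hk))

h1 = ("Sunny", "?", "?", "Strong", "?", "?")
h2 = ("Sunny", "?", "?", "?", "?", "?")
print(more_general_or_equal(h2, h1))  # True: h2 is more general than h1
print(more_general_or_equal(h1, h2))  # False
```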

Slide 10

Find-S, a Maximally Specific Hypothesis Learning Algorithm
  Initialize h to the most specific hypothesis in H
  For each positive training instance x:
    For each attribute constraint ai in h:
      If the constraint ai is satisfied by x, then do nothing
      Else replace ai in h by the next more general constraint that is satisfied by x
  Output hypothesis h
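The steps above can be sketched directly in Python for the water-sports data (a minimal sketch; the generalization step here is specialized to the conjunctive representation, where the "next more general constraint" is the literal value for a "0" constraint and "?" otherwise):

```python
def find_s(examples, n_attrs=6):
    h = ["0"] * n_attrs            # most specific hypothesis in H
    for x, label in examples:
        if label != "Yes":
            continue               # Find-S ignores negative examples
        for i, (ci, vi) in enumerate(zip(h, x)):
            if ci == "0":
                h[i] = vi          # first positive example: adopt its value
            elif ci != "?" and ci != vi:
                h[i] = "?"         # generalize to the next more general constraint
    return tuple(h)

data = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), "Yes"),
    (("Sunny", "Warm", "High", "Strong", "Warm", "Same"), "Yes"),
    (("Rainy", "Cold", "High", "Strong", "Warm", "Change"), "No"),
    (("Sunny", "Warm", "High", "Strong", "Cool", "Change"), "Yes"),
]
print(find_s(data))  # ('Sunny', 'Warm', '?', 'Strong', '?', '?')
```

On the database from slide 5 this yields <Sunny, Warm, ?, Strong, ?, ?>, the maximally specific hypothesis consistent with the positive examples.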

Slide 11

Shortcomings of Find-S. Although Find-S finds a hypothesis consistent with the training data, it does not indicate whether that is the only one available. Is it a good strategy to prefer the most specific hypothesis? What if the training set is inconsistent (noisy)? What if there are several maximally specific consistent hypotheses? Find-S cannot backtrack!

Slide 12

Version Spaces and the Candidate-Elimination Algorithm. Definition: a hypothesis h is consistent with a set of training examples D iff h(x) = c(x) for each example <x, c(x)> in D. Definition: the version space, denoted VS_H,D, with respect to hypothesis space H and training examples D, is the subset of hypotheses from H consistent with the training examples in D. NB: while a version space can be exhaustively enumerated, a more compact representation is preferred.
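The consistency definition translates directly into code. A small sketch (assuming the conjunctive "?"-wildcard representation from the earlier slides):

```python
def matches(h, x):
    """True iff hypothesis h covers instance x ("?" accepts any value)."""
    return all(c == "?" or c == v for c, v in zip(h, x))

def consistent(h, D):
    """h is consistent with D iff h(x) = c(x) for every <x, c(x)> in D."""
    return all(matches(h, x) == label for x, label in D)

# The database from slide 5, with booleans for the class labels.
D = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
    (("Sunny", "Warm", "High", "Strong", "Warm", "Same"), True),
    (("Rainy", "Cold", "High", "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High", "Strong", "Cool", "Change"), True),
]

print(consistent(("Sunny", "Warm", "?", "Strong", "?", "?"), D))  # True
print(consistent(("?", "?", "?", "?", "?", "?"), D))              # False
```

The all-"?" hypothesis fails because it also covers the negative example (day 3), illustrating that the version space excludes over-general hypotheses.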

Slide 13

A Compact Representation for Version Spaces. Instead of enumerating all of the hypotheses consistent with a training set, we can represent its most specific and most general boundaries. The hypotheses contained between these two boundaries can be generated as needed. Definition: the general boundary G, with respect to hypothesis space H and training data D, is the set of maximally general members of H consistent with D. Definition: the specific boundary S, with respect to hypothesis space H and training data D, is the set of minimally general (i.e., maximally specific) members of H consistent with D.

Slide 14

Candidate-Elimination Learning Algorithm. The Candidate-Elimination algorithm computes the version space containing all (and only those) hypotheses from H that are consistent with an observed sequence of training examples. See the algorithm in [Mitchell, p. 33].
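A sketch of the algorithm for the conjunctive representation follows (my simplified rendering of the version in [Mitchell, p. 33], maintaining the S and G boundaries from the previous slide):

```python
def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

def more_general_or_equal(hj, hk):
    return all(cj == "?" or cj == ck or ck == "0" for cj, ck in zip(hj, hk))

def generalize(s, x):
    """Minimal generalization of s that covers the positive example x."""
    return tuple(v if c == "0" else (c if c == v else "?")
                 for c, v in zip(s, x))

def specialize(g, x, domains):
    """All minimal specializations of g that exclude the negative example x."""
    return [g[:i] + (val,) + g[i + 1:]
            for i, c in enumerate(g) if c == "?"
            for val in domains[i] if val != x[i]]

def candidate_elimination(examples, domains):
    n = len(domains)
    S = {("0",) * n}   # specific boundary
    G = {("?",) * n}   # general boundary
    for x, label in examples:
        if label == "Yes":
            G = {g for g in G if matches(g, x)}
            S = {generalize(s, x) for s in S}
            S = {s for s in S if any(more_general_or_equal(g, s) for g in G)}
        else:
            S = {s for s in S if not matches(s, x)}
            G = {h for g in G
                 for h in (specialize(g, x, domains) if matches(g, x) else [g])
                 if any(more_general_or_equal(h, s) for s in S)}
            G = {g for g in G
                 if not any(h != g and more_general_or_equal(h, g) for h in G)}
    return S, G

domains = [("Sunny", "Cloudy", "Rainy"), ("Warm", "Cold"), ("Normal", "High"),
           ("Strong", "Weak"), ("Warm", "Cool"), ("Same", "Change")]
data = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), "Yes"),
    (("Sunny", "Warm", "High", "Strong", "Warm", "Same"), "Yes"),
    (("Rainy", "Cold", "High", "Strong", "Warm", "Change"), "No"),
    (("Sunny", "Warm", "High", "Strong", "Cool", "Change"), "Yes"),
]
S, G = candidate_elimination(data, domains)
print(sorted(S))  # [('Sunny', 'Warm', '?', 'Strong', '?', '?')]
print(sorted(G))  # [('?', 'Warm', '?', '?', '?', '?'), ('Sunny', '?', '?', '?', '?', '?')]
```

On the slide 5 database this reproduces the boundaries worked out in Mitchell's text: S = {<Sunny, Warm, ?, Strong, ?, ?>} and G = {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}.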

Slide 15

Remarks on Version Spaces and Candidate-Elimination. The version space learned by the Candidate-Elimination algorithm will converge toward the hypothesis that correctly describes the target concept provided that: (1) there are no errors in the training examples; (2) there is some hypothesis in H that correctly describes the target concept. Convergence can be sped up by presenting the data in a strategic order. The best examples are those that satisfy exactly half of the hypotheses in the current version space. Version spaces can be used to assign confidence scores to the classification of new instances.

Slide 16

Inductive Bias I: A Biased Hypothesis Space
Database:
Day | Sky    | AirTemp | Humidity | Wind   | Water | Forecast | WaterSport (class)
1   | Sunny  | Warm    | Normal   | Strong | Cool  | Change   | Yes
2   | Cloudy | Warm    | Normal   | Strong | Cool  | Change   | Yes
3   | Rainy  | Warm    | Normal   | Strong | Cool  | Change   | No
Given our previous choice of hypothesis space representation, no hypothesis is consistent with the above database: we have BIASED the learner to consider only conjunctive hypotheses.
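The claim can be verified exhaustively: enumerating every conjunctive hypothesis over these attribute domains (each constraint a literal value or "?") shows that none is consistent with the three examples. A brute-force sketch (mine, not from the lecture):

```python
from itertools import product

def matches(h, x):
    return all(c == "?" or c == v for c, v in zip(h, x))

# The database from this slide, with booleans for the class labels.
D = [
    (("Sunny", "Warm", "Normal", "Strong", "Cool", "Change"), True),
    (("Cloudy", "Warm", "Normal", "Strong", "Cool", "Change"), True),
    (("Rainy", "Warm", "Normal", "Strong", "Cool", "Change"), False),
]

domains = [["Sunny", "Cloudy", "Rainy"], ["Warm", "Cold"], ["Normal", "High"],
           ["Strong", "Weak"], ["Warm", "Cool"], ["Same", "Change"]]

# Enumerate every conjunctive hypothesis: each constraint is a value or "?"
# (hypotheses containing "0" cover nothing, so they cannot fit the positives).
all_hypotheses = product(*[vals + ["?"] for vals in domains])
consistent_h = [h for h in all_hypotheses
                if all(matches(h, x) == label for x, label in D)]
print(len(consistent_h))  # 0: no conjunctive hypothesis fits this data
```

Intuitively: covering both the Sunny and Cloudy positives forces the Sky constraint to "?", but then the hypothesis also covers the otherwise-identical Rainy negative.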

Slide 17

Inductive Bias II: An Unbiased Learner. In order to solve the problem caused by the bias of the hypothesis space, we can remove this bias and allow the hypotheses to represent every possible subset of instances. The previous database could then be expressed as: <Sunny, ?, ?, ?, ?, ?> v <Cloudy, ?, ?, ?, ?, ?>. However, such an unbiased learner is not able to generalize beyond the observed examples! Every unobserved example will be classified positive by exactly half of the hypotheses in the version space and negative by the other half.

Slide 18

Inductive Bias III: The Futility of Bias-Free Learning. Fundamental property of inductive learning: a learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. We always have recourse to inductive biases. Example: we all know that the sun will rise tomorrow. Although we cannot deduce that it will do so from the fact that it rose today, yesterday, the day before, and so on, we do take this risk, or use this inductive bias, naturally!

Slide 19

Inductive Bias IV: A Definition. Consider a concept learning algorithm L for the set of instances X. Let c be an arbitrary concept defined over X, and let Dc = {<x, c(x)>} be an arbitrary set of training examples of c. Let L(xi, Dc) denote the classification assigned to the instance xi by L after training on the data Dc. The inductive bias of L is any minimal set of assertions B such that for any target concept c and corresponding training examples Dc: (for all xi in X) [(B ^ Dc ^ xi) |- L(xi, Dc)]

Slide 20

Ranking Inductive Learners According to Their Biases (from weak to strong bias):
Rote-Learner: this system simply memorizes the training data and their classification; no generalization is involved.
Candidate-Elimination: new instances are classified only if all of the hypotheses in the version space agree on the classification.
Find-S: new instances are classified using the most specific hypothesis consistent with the training data.
