An Exemplar-Based Context Model for Cognitive Modelling


This study explores a context model based on exemplars for item classification in cognitive modelling. The model is applied within the context theory framework to better understand how humans classify objects.



Presentation Transcript


1. Cognitive Modelling An exemplar-based context model Benjamin Moloney Student No: 09131175

2. Context theory When classifying an item in a category C, its degree of membership equals the sum of its similarity to all examples of that category, divided by its summed similarity to all examples of all categories, where U denotes the set of all examples of all categories.
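The slide's equation does not survive in this transcript; a minimal rendering of the rule as stated, writing sim(i, j) for the similarity between the item i being classified and a stored exemplar j, is:

\[
  \mathrm{Membership}(i, C) \;=\; \frac{\sum_{j \in C} \mathrm{sim}(i, j)}{\sum_{k \in U} \mathrm{sim}(i, k)}
\]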

3. Context theory Exemplar Model The exemplar model uses a multiplicative similarity computation: compare the items' values on each dimension. If the values on a given dimension are the same, mark a 1 for that dimension; if they differ, assign a parameter s for that dimension. Multiply the marked values across all dimensions to compute the overall similarity of the two items. The following example shows how the exemplar model could be applied to an experiment on how people classified artificial items (described on three dimensions) into three previously learned artificial categories. The participants were given a number of training items from which they learned to identify diseases, and their knowledge was then tested by asking them to identify new items as members of six categories (A, B, C, and their pairwise conjunctions).
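A minimal sketch of this multiplicative similarity rule (the function and parameter names are illustrative, not taken from the original model):

    def similarity(item_a, item_b, s):
        # Multiplicative similarity of the exemplar model: each matching
        # dimension contributes 1.0, each mismatching dimension contributes
        # the parameter s[d] for that dimension.
        score = 1.0
        for d, (a, b) in enumerate(zip(item_a, item_b)):
            score *= 1.0 if a == b else s[d]
        return score

    # e.g. similarity(("C", "A", "B"), ("A", "B", "A"), (0.2, 0.5, 0.3)) == 0.2 * 0.5 * 0.3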

4. Context theory An example

Set of category items:
< D1:A, D2:A, D3:B >  Disease A
< D1:A, D2:B, D3:A >  Disease A
< D1:B, D2:A, D3:A >  Disease A
< D1:C, D2:A, D3:B >  Disease B
< D1:B, D2:C, D3:C >  Disease B

Attention parameters: s1 = 0.2, s2 = 0.5, s3 = 0.3

Classifying the new item < C, A, B > in category A:
vs < A, A, B > (Disease A): 0.2 * 1.0 * 1.0 = 0.20
vs < A, B, A > (Disease A): 0.2 * 0.5 * 0.3 = 0.03
vs < B, A, A > (Disease A): 0.2 * 1.0 * 0.3 = 0.06
vs < C, A, B > (Disease B): 1.0 * 1.0 * 1.0 = 1.00
vs < B, C, C > (Disease B): 0.2 * 0.5 * 1.0 = 0.10

Membership(< C, A, B >, A) = (0.20 + 0.03 + 0.06) / (0.20 + 0.03 + 0.06 + 1.00 + 0.10) = 0.21
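Combining the similarity rule with the membership formula gives a small, self-contained sketch of the context model applied to this example (illustrative code, not the author's implementation):

    from math import prod

    # Training exemplars and attention parameters from the example above.
    training = [
        (("A", "A", "B"), "A"),
        (("A", "B", "A"), "A"),
        (("B", "A", "A"), "A"),
        (("C", "A", "B"), "B"),
        (("B", "C", "C"), "B"),
    ]
    s = (0.2, 0.5, 0.3)

    def sim(x, y):
        # Same multiplicative rule as the earlier sketch.
        return prod(1.0 if a == b else s[d] for d, (a, b) in enumerate(zip(x, y)))

    def membership(item, category):
        # Summed similarity to the category's exemplars, divided by summed
        # similarity to every exemplar in the training set (the set U).
        return (sum(sim(item, ex) for ex, c in training if c == category)
                / sum(sim(item, ex) for ex, _ in training))

    print(membership(("C", "A", "B"), "A"))  # membership of < C, A, B > in Disease A

Run as written, with the mismatch parameter applied to every differing dimension, this evaluates to roughly 0.22, close to the 0.21 worked out by hand above.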

5. My Model It is this exemplar-based model that formed the basis of my own model. I attempted to model the results of the previous experiment, in which participants studied 16 training items and were then asked to classify 5 new test items. The participants rated each new test item as a member of each category (A, B, and C) and of each conjunction (A&B, A&C, and B&C). Before constructing the model based on the exemplar approach, I identified several characteristics that could potentially be important in its structure:
  • The high frequency of certain features in several disease categories
  • The basis for the classification of an item in conjunctive categories
  • The attention parameters (s1, s2, s3)

6. Feature frequency per category I wanted to create a model based partly on identifying features that have a notably high frequency, per dimension, in particular disease categories. For example, of the four instances of feature A appearing in dimension 1 of the training items, three appear in category A and one appears in the conjunctive category A & B. I therefore decided that any test item with feature A in dimension 1 would be given credit when being identified as category A, even when the feature did not match that particular training item. That is, although the dimension 1 feature of training item 4 is Y, a test item with feature A in dimension 1 would be weighted using a new measure (3.5/4 = 0.875) rather than the lower attention parameter.

7. Justification The reasoning behind this decision was to mimic how the participants identified patterns in the training item features and in how those features are distributed from category to category. The weight value 3.5/4 = 0.875 was based on the three category A instances plus the instance in the conjunctive category (assigned a relative value of 0.5). Similarly, high feature frequency was identified in category B (feature B is given weight values of 2.5/3 = 0.833 and 4.5/6 = 0.75 for dimensions 2 and 3 respectively) and in category C (feature C appears only in dimension 3 for this category and so is given a weight value of 1 to reflect its importance, and is designated 4/5 = 0.8 for dimension 3). The impact this has on the model varies from almost irrelevant to subtly influential: changing the first weight value from 0.875 to 0.1 has no tangible effect, while changing the last weight value from 0.8 to 0.5 (still higher than the attention parameter) changes the predictions for two of the test items.
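A small sketch of this frequency-based weighting as described above (the function name is illustrative and the half-credit rule for conjunctive instances follows the slide's description; this is not the author's code):

    def frequency_weight(category_count, conjunctive_count, total_count):
        # Instances of the feature in the category itself get full credit,
        # instances in a conjunction involving that category get half credit,
        # relative to all occurrences of the feature on that dimension.
        return (category_count + 0.5 * conjunctive_count) / total_count

    print(frequency_weight(3, 1, 4))  # 0.875, the weight for feature A on dimension 1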

8. Classification of an item in conjunctive categories My next major decision was how to give each test item a score as a member of each conjunction of categories. It did not seem adequate to compute an item's membership in a conjunction by simply combining that item's computed membership in the first category with its computed membership in the second, since this treats those memberships as independent of each other. I wanted a method of calculating a conjunctive membership score that also took into account a test item's non-membership of a category. To do this I used the formula:
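The formula itself does not survive in this transcript. Purely as an illustration of the kind of expression described on the next slide (this is not the author's actual formula), a conjunctive score with those properties could look like:

\[
  M_{A \& B}(i) \;=\; M_A(i)\,M_B(i)\,\bigl(1 - \lvert M_A(i) - M_B(i) \rvert\bigr)\,\bigl(1 - M_C(i)\bigr)
\]

where M_X(i) denotes the item's computed membership score in the singular category X: the product of M_A and M_B rewards high scores in both categories, the (1 - |M_A - M_B|) factor rewards near-equivalence of the two scores, and the (1 - M_C) factor rewards low membership in the remaining category.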

9. Justification This formula awards a high conjunctive category score to those test items that score highly in two singular categories and whose two scores are nearly equivalent. The combination of this with a low score in the remaining singular category determines how a test item's membership of a conjunctive category is calculated. By doing this I hoped to adequately reflect how human reasoning about conjunctions classifies any test item deemed to be a member of such a conjunctive category.

10. Attention Parameters Finally, suitable attention parameters had to be chosen. I reasoned that relatively low values should be chosen, since this would balance out the reward a test item received for possessing a highly frequent feature. Initially these values were set trivially (s1 = 0.2, s2 = 0.2, s3 = 0.2). I decided that a useful method of gauging optimal values for these parameters was to identify the maximum difference between the participants' mean membership scores and the model's equivalent computed membership scores (the average difference could also be used). Through a process of trial and error I then altered the attention parameters so that this difference was minimised, eventually arriving at the values s1 = 0.2, s2 = 0.4, s3 = 0.35. The final table of scores attained using the model is shown on the next slide.
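The trial-and-error search described above could also be automated; a minimal sketch (the predict_scores function and participant_means data are hypothetical placeholders standing in for the model and the experimental means):

    import itertools

    def fit_attention_parameters(predict_scores, participant_means, grid):
        # Search over candidate (s1, s2, s3) triples and keep the one that
        # minimises the maximum absolute difference between the model's
        # computed membership scores and the participants' mean scores.
        best_s, best_err = None, float("inf")
        for s in itertools.product(grid, repeat=3):
            model_scores = predict_scores(s)
            err = max(abs(m - p) for m, p in zip(model_scores, participant_means))
            if err < best_err:
                best_s, best_err = s, err
        return best_s, best_err

The same loop could minimise the average difference instead, as noted above.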

11. Table of Results

12. Correlation Correlation Score = 0.91
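The slide does not say how this score was computed; assuming it is the Pearson correlation between the model's computed scores and the participants' mean scores, a minimal way to calculate it would be:

    import statistics

    def pearson_r(xs, ys):
        # Pearson correlation between the model's scores and the participants' means.
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))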

13. Performance and Assessment As can be seen, the model's predictions correlate well with the ratings given by the participants in the experiment. However, the model fails to show suitable robustness, and flaws can be identified when it is held up to any moderate level of scrutiny. For example, the predicted category scores are extremely sensitive to changes in the attention parameters (changing s3 from 0.35 to 0.36, for instance, changes test item 4's designation from C to B). It can be argued from this that the model is tailored to fit the data, and may not produce similarly accurate results when given a new set of test items to predict.
