Coupling Niche Browsers and Affect Analysis for an Opinion Mining Application - PowerPoint PPT Presentation

Presentation Transcript

  1. Coupling Niche Browsers and Affect Analysis for an Opinion Mining Application. Gregory Grefenstette, Yan Qu, James G. Shanahan, David A. Evans. Clairvoyance Corporation, Pittsburgh, PA, USA

  2. Introduction • Newspapers generally attempt to present the news objectively, but often fall short. • Textual analysis shows that many words carry a positive or negative emotional charge. • In this paper, the authors show that coupling niche-browsing technology and affect-analysis technology allows them to create a new application that measures the slant in opinion given to public figures in the popular press. • By coupling a niche browser, Google News, which extracts temporally ranked news items from the Web, with affect analysis, they can find underlying nuance and slant in news text.

  3. Niche Browsers • Niche browsers produce full-text indexes of documents found on the web, as Google does. • Rather than indexing all the pages they find, however, niche browsers first classify pages and then index only the pages belonging to a specific class (e.g., Google News, ResearchIndex). • Since niche browsers do not control the format of the documents they must analyze, most depend on entity-extraction technology.

  4. Affect Analysis (1/4) • Affect analysis is a natural language processing technique for recognizing the emotive aspect of text. • For example, one description of actors in a civil war may call them freedom-fighters, whereas another describing the same events may use terrorists. • In the 1960s, Stone and Lasswell began building lexicons in which words were labeled with affect. • For example, in the Lasswell Value Dictionary (1969), the word admire was tagged with a positive value along the affect dimension RESPECT.

  5. Affect Analysis (2/4) • The dictionary marked words with binary values along eight basic value dimensions (WEALTH, POWER, RECTITUDE, RESPECT, ENLIGHTENMENT, SKILL, AFFECTION, and WELLBEING). • Stone’s work on the General Inquirer Dictionary has continued to this day. • The dictionary now (early 2004) contains 1915 words marked as generally positive and 2291 words as negative. • Words either possess an attitude or not; there is no question of degree.

  6. Affect Analysis (3/4) • In addition to these manually constructed lexicons that include affect attitudes, work has begun on automatically acquiring affect information. • Hatzivassiloglou & McKeown (1997) demonstrated that, given a set of emotively charged adjectives, positively oriented adjectives tended to be conjoined with other positively oriented adjectives, and negative adjectives with negative ones, as in “good and honest” or “bad and deceitful”. • They took a number of frequently occurring adjectives that they decided had some type of orientation, and then used statistics on whether two adjectives appeared together in a corpus in the pattern X and Y to decide whether they had the same orientation.
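
The conjunction heuristic above can be sketched in a few lines. This is a toy illustration over a made-up three-sentence corpus, not the procedure of Hatzivassiloglou & McKeown, which used log-linear models over a large corpus; the seed words and sentences here are invented for the example.

```python
import re

# Seed adjectives with known orientation (+1 positive, -1 negative).
SEEDS = {"good": +1, "bad": -1}

# Tiny invented corpus; conjoined adjectives tend to share orientation.
corpus = [
    "a good and honest answer",
    "a bad and deceitful scheme",
    "an honest and fair ruling",
]

orientation = dict(SEEDS)
changed = True
while changed:  # propagate labels until no new word gets one
    changed = False
    for sentence in corpus:
        for x, y in re.findall(r"(\w+) and (\w+)", sentence):
            # "X and Y" pattern: copy a known orientation to the unknown word
            if x in orientation and y not in orientation:
                orientation[y] = orientation[x]
                changed = True
            elif y in orientation and x not in orientation:
                orientation[x] = orientation[y]
                changed = True

print(orientation)
```

Note that "honest" is labeled from the seed "good" in the first sentence, which in turn lets "fair" be labeled from "honest" in the third, so the propagation reaches words never conjoined with a seed directly.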

  7. Affect Analysis (4/4) • Wiebe (2000) used a seed set of “subjective” adjectives and a thesaurus-generation method to find more subjective adjectives. • Turney & Littman (2003) automatically discovered positively and negatively charged words, given fourteen seed words, using statistics of association gathered from the Web. • They found that positive words tend to associate more often with positive words than with negative words. • In addition to merely tagging affect-laden terms as positive or negative, one can also position affect words along more discriminating axes.
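
A minimal sketch of the Turney & Littman idea, assuming a tiny invented corpus and sentence-level co-occurrence counts in place of the web search hit counts the real method used; the normalization terms of true pointwise mutual information are dropped for brevity.

```python
from math import log2

# Seed sets of known polarity (subset of invented seeds for the example).
POS_SEEDS = {"excellent", "good"}
NEG_SEEDS = {"poor", "bad"}

# Tiny invented corpus standing in for web search results.
corpus = [
    "the food was excellent and delightful",
    "a good and delightful evening",
    "the service was poor and dreadful",
    "a bad and dreadful mistake",
]

def cooccur(word, seeds):
    """Count sentences where word appears with any seed (+1 smoothing)."""
    return 1 + sum(1 for s in corpus
                   if word in s.split() and seeds & set(s.split()))

def semantic_orientation(word):
    # Positive score: word keeps company with positive seeds; negative: with negative ones.
    return log2(cooccur(word, POS_SEEDS)) - log2(cooccur(word, NEG_SEEDS))

print(semantic_orientation("delightful"))  # > 0, associates with positive seeds
print(semantic_orientation("dreadful"))    # < 0, associates with negative seeds
```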

  8. Lexicon (1/2) • In the late 1990s, the authors began development of a lexicon of affect words by hand (Subasic and Huettner). • Entries in the lexicon consist of five fields: (1) a lemmatized word form, (2) a simplified part of speech [adjective, noun, verb, adverb], (3) an affect class, (4) a weight for the centrality of the word in that class, and (5) a weight for the intensity of the word in that class. • For example, “gleeful” has been assigned to two affect classes and has been deemed more closely related to the class happiness: “gleeful” adj happiness 0.7 0.6 ; “gleeful” adj excitement 0.3 0.6
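
The five-field entry format above could be parsed as follows; the parsing code and type names are illustrative, not the authors' implementation.

```python
from typing import NamedTuple

class AffectEntry(NamedTuple):
    lemma: str          # lemmatized word form
    pos: str            # simplified part of speech
    affect_class: str   # affect class, e.g. happiness
    centrality: float   # how central the word is to the class
    intensity: float    # how intense the word is within the class

def parse_entry(line: str) -> AffectEntry:
    """Parse one whitespace-separated five-field lexicon line."""
    lemma, pos, cls, centrality, intensity = line.split()
    return AffectEntry(lemma.strip('"'), pos, cls,
                       float(centrality), float(intensity))

entry = parse_entry('"gleeful" adj happiness 0.7 0.6')
print(entry.affect_class, entry.centrality)  # happiness 0.7
```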

  9. Lexicon (2/2) • The lexicon contains 2258 words classed into 83 affect classes. • A simplified version of this affect lexicon is used, in which each class (such as happiness) is labeled as positive or negative. • The simplified version looks like the following: the first column contains the affect word, the second contains one of the classes the word has been assigned to, and the third column contains the positive/negative sign associated with that class: admonish warning - ; adore love +
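
In code, the simplified lexicon reduces to a mapping from word to (class, sign) pairs with no notion of degree; this two-entry dictionary is just a stand-in for the real 2258-word lexicon.

```python
# Stand-in for the simplified affect lexicon: word -> [(class, sign), ...].
SIMPLE_LEXICON = {
    "admonish": [("warning", "-")],
    "adore": [("love", "+")],
}

def signs_for(word):
    """Return the +/- signs of the affect classes a word belongs to."""
    return [sign for _cls, sign in SIMPLE_LEXICON.get(word, [])]

print(signs_for("adore"))     # ['+']
print(signs_for("admonish"))  # ['-']
```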

  10. Entity-Directed Opinion Miner (1/2) • The entity-directed opinion miner couples affect analysis with a niche browser, specifically the Google News browser. • Our system functions as follows: • The end-user specifies the entity about whom current public opinion is to be mined, as well as the time period involved. • Our system sends a request to the Google News browser and fetches up to 1000 references to news articles concerning this entity during the specified period.

  11. Entity-Directed Opinion Miner (2/2) • Each article is fetched, and the text around the specified entity is extracted using a KWIC (Keyword-in-Context) program, with a window of 120 characters before and after the entity. • The extracted windows are sorted and duplicates removed (to eliminate duplicated article portions). • The windows are collated, and all affect words (in any morphological variant) from the lexicon are identified; affect classes are associated with each affect word using the lexicon. • A score for the entity is produced by dividing the number of instances of positive affect classes by the number of instances of negative affect classes. If there are more positive than negative references, the score will be greater than 1.0; if there are more negative references, it will be less than 1.0.
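
The extraction-and-scoring steps above can be sketched roughly as follows, with a tiny stand-in lexicon and hard-coded article text instead of live Google News results; unlike the real system, this sketch matches only exact word forms rather than morphological variants.

```python
import re

# Stand-in lexicon: affect word -> positive/negative sign of its class.
SIMPLE_LEXICON = {"feared": "-", "terror": "-", "detain": "-", "admire": "+"}

def kwic_windows(texts, entity, width=120):
    """Extract `width` characters on each side of every entity mention."""
    windows = set()  # a set removes duplicated article portions
    for text in texts:
        for m in re.finditer(re.escape(entity), text):
            windows.add(text[max(0, m.start() - width):m.end() + width])
    return sorted(windows)

def affect_score(windows):
    """Ratio of positive to negative affect occurrences (>1.0 = positive slant)."""
    pos = neg = 0
    for window in windows:
        for word in re.findall(r"\w+", window.lower()):
            sign = SIMPLE_LEXICON.get(word)
            if sign == "+":
                pos += 1
            elif sign == "-":
                neg += 1
    return pos / neg if neg else float("inf")

articles = ["Saddam's feared sons Uday and Qusay were buried on Saturday",
            "the end of Uday's and Qusay's reign of terror"]
print(affect_score(kwic_windows(articles, "Qusay")))  # 0.0: only negative words
```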

  12. Example (1/3) • We applied the system to extract opinion around “Qusay Hussein” following his death. • The system was run with the following command: ./getnews “Qusay Hussein” “Qusay” • The first string after the command getnews was sent to Google News to retrieve articles containing it; by default, the 1000 most recent articles mentioning “Qusay Hussein” were retrieved. • The second string, “Qusay”, was used for KWIC window extraction in the retrieved articles.

  13. Example (2/3) • The KWIC program extracted windows such as the following, centered on the string “Qusay”: …detaining scores of people. Saddam’s feared sons Uday and Qusay were buried on Saturday on the outskirts of Tikrit… / …more people don’t know about and gave details on the final countdown of the end of Uday’s and Qusay’s reign of terror… • These windows were sorted and duplicates eliminated. • In the remaining segments, all affect words from the lexicon were identified (e.g., detain, feared, terror) and assigned to their affect classes via lookup in the affect lexicon.

  14. Example (3/3) • No disambiguation was performed to decide which affect class to assign when more than one was possible, but words with ambiguous affect, i.e., belonging to both positively and negatively charged classes, were removed from consideration. • In the final step, a score was assigned to “Qusay” by counting the instances of positively charged affect classes (1536) and of negatively charged classes (3736) evoked in the retrieved text around “Qusay” and taking their ratio, which yields 1536/3736 = 0.41. • Because this ratio is less than 1, more negative-class words were present.

  15. Evaluation (1/4) • We considered two news sources and compared the treatment they gave to two public personalities, President George Bush and Howard Dean. • We drew stories concerning these figures from two online sources: a conservative newspaper, the Washington Times, and a closer-to-the-center mainstream newspaper, the Washington Post. We applied our affect scoring system to news stories from each source.

  16. Evaluation (2/4) • Though George Bush gets a slightly positive slant in both the Post and the Times, the conservative paper presents the more liberal Democrat, Howard Dean, in a predominantly negative fashion. • If we narrow the window of text used around each name, we get the scoring behavior shown in the table below:

  17. Evaluation (3/4) • We see that, in the Washington Times, with decreasing window size, a progressively greater proportion of positively charged words is associated with the name George Bush. • In August 2003, Arnold Schwarzenegger was running for governor of California against Gray Davis, who was being recalled by the California electorate. • During that period we applied our entity-directed opinion miner to both candidates. Our scorer gave the following scores: Arnold Schwarzenegger 2.17 ; Gray Davis 1.14

  18. Evaluation (4/4) • The high score for Schwarzenegger shows that the text nearest his name was much more positive than negative in the period leading up to the election he was to win. • After the election this affect drops off: in December 2003 we find Arnold Schwarzenegger 1.32. • The words associated with Schwarzenegger are still positive, but not as much so as during the campaign.

  19. Related Work • Some researchers (Wiebe) have worked on identifying whose opinions the statements found in an article represent. • That research identifies opinion by recognizing rhetorical structure, a more complicated, and as yet unsolved, process than the simple lexical matching performed here. • Our system deals with newspaper articles, which can cover many subjects and persons, and which are usually written so as not to express an opinion.

  20. Conclusion • In this initial exploratory work, we have shown that our scoring methods seem to provide intuitively correct results for text that we believe to be for or against a certain personality. • We have also shown that conservative newspapers provide more positive views of conservative public figures. • In contrast to existing recommender systems or automatic rating systems, our application produces scores without training and provides scores for entities in non-opinion-based text. • Our system, coupling these two technologies, provides a rough method for uncovering nuance and slant in otherwise objective text.