Finding Subjectivity Clues

Finding subjectivity clues; sentence and clause-level classification

Reactions

Sentiment Retrieval Using Generative Models (authors: Eguchi, K. & Lavrenko, V.; read by: John Knox)

  • Sentiment retrieval: combined IR and sentiment classification
  • Novel idea
  • Does it work? No reported recall, low precision… but competitive with similar systems

Automatic Identification of Sentiment Vocabulary (authors: Wilson, T., Wiebe, J. & Hoffmann, P.; read by: Michael Lipschultz)

  • Two-step approach: classify neutral vs. polar, then determine polarity
  • Discussion of the role of "not" and "will"
  • Reduces some errors by allowing neutral terms into stage 2
  • Relation to "Identifying Subjective Adjectives through Web-based Mutual Information" (Baroni, M. & Vegnaduzzo, S.): that work is concerned with finding opinions; Wilson et al. is concerned with the next step

Using Emoticons to Reduce Dependency in Machine Learning Techniques (author: Read, J.; read by: Yaw Gyamfi)

  • Lack of rigor in the analysis
  • Rare work in that it examines temporal dependency… but is it persuasive?
  • Title is misleading: emoticons are used more for reducing annotation costs
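The annotation-cost point is concrete in a sketch: emoticons act as noisy labels, so a training set can be harvested without manual annotation. The emoticon patterns below are illustrative, not Read's exact lists:

```python
import re

# Hypothetical sketch of emoticon-based distant supervision: texts
# containing a positive emoticon become positive training examples,
# negative emoticons become negative ones, and the emoticon itself is
# stripped so a classifier cannot simply memorize it.

POS = re.compile(r"[:;]-?\)")   # :) :-) ;)
NEG = re.compile(r":-?\(")      # :( :-(

def harvest(texts):
    labeled = []
    for t in texts:
        if POS.search(t) and not NEG.search(t):
            labeled.append((NEG.sub("", POS.sub("", t)).strip(), "positive"))
        elif NEG.search(t) and not POS.search(t):
            labeled.append((POS.sub("", NEG.sub("", t)).strip(), "negative"))
    return labeled

print(harvest(["loved the film :)", "what a waste :(", "no emoticon here"]))
# [('loved the film', 'positive'), ('what a waste', 'negative')]
```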

Identifying Expressions of Opinion in Context (authors: Breck, E., Choi, Y. & Cardie, C.; read by: Matt McGettigan)

  • Reviewer impressed with performance (close to human annotators)… but…
  • Concern with evaluation standards
  • Subjective phrases as a natural extension of subjective adjectives
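Treating subjective phrases rather than single adjectives as the unit turns the task into span extraction. A toy illustration of that framing as token-level BIO tagging, a common encoding for expression-level extraction (the tags here are hand-written, not system output):

```python
# Toy BIO framing for opinion-expression extraction: each token is
# tagged Begin/Inside/Outside, so whole phrases, not just single
# adjectives, can be marked as subjective. Hand-labeled example.

tokens = ["The", "plot", "was", "a", "complete", "disaster", "."]
tags   = ["O",   "O",    "O",   "B",  "I",        "I",        "O"]

def extract_spans(tokens, tags):
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

print(extract_spans(tokens, tags))  # ['a complete disaster']
```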

Feature Subsumption for Opinion Analysis (authors: Riloff, E., Patwardhan, S. & Wiebe, J.; read by: Mahesh)

  • Considering dependencies among features can add considerable performance
  • Considers POS as subsuming unigrams
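A sketch of the subsumption idea: a unigram subsumes any bigram containing it (it matches a superset of text spans), and the subsumed feature survives only if it adds enough signal. The information-gain values and the delta threshold below are made up for illustration:

```python
# Hypothetical sketch of feature subsumption pruning: a bigram is
# kept only if its information gain beats every subsuming unigram's
# by delta. The IG values below are invented for illustration.

info_gain = {"happy": 0.30, "very": 0.01,
             "very happy": 0.31, "not happy": 0.45}

def prune_bigrams(info_gain, delta=0.02):
    kept = []
    for feat, ig in info_gain.items():
        words = feat.split()
        if len(words) == 1:
            kept.append(feat)                 # unigrams always survive
            continue
        subsumers = [info_gain.get(w, 0.0) for w in words]
        if all(ig > s + delta for s in subsumers):
            kept.append(feat)                 # adds enough new signal
    return kept

print(prune_bigrams(info_gain))
# ['happy', 'very', 'not happy']  ("very happy" is subsumed by "happy")
```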

Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews (authors: Dave, K., Lawrence, S. & Pennock, D.)

  • Supposedly compares IR vs. machine learning techniques, but the IR approach skews heavily toward the machine learning approach
  • Comparison to the emoticon paper: explicit rating (self-tagging) vs. automatic identification
  • Granularity: some features that help at, e.g., the sentence level are less useful at the document level

Extracting Appraisal Expressions (authors: Bloom, K., Garg, N. & Argamon, S.; read by: Danielle Mowery)

  • Significantly different annotation schema compared to MPQA
  • Author evaluated system output post facto
  • Can't evaluate precision
  • Ample opportunity for bias

Major themes IR for sentiment analysis Problems with evaluation standards Weak standards Lack of rigor in analysis Not enough data supplied (e.g. accuracy only)

Increasing sophistication of features Multi-stage approaches Dependencies among features Levels of tagging (phrase/sentence/document)
