Vocabulary Pruning for Improved Context Recognition

Rasmus Elsborg Madsen, Sigurdur Sigurdsson, Lars Kai Hansen, Jan Larsen

Abstract: Language independent 'bag-of-words' representations are
surprisingly effective for text classification. The representation is
high-dimensional, however, and contains many words that are not
consistent indicators of category membership. These inconsistent words
reduce the generalization performance of subsequent classifiers, e.g.,
through ill-posed principal component transformations. In this
communication, our aim is to study the effect of removing the least
relevant words from the bag-of-words representation. We consider a new
approach that uses neural-network-based sensitivity maps and
information gain to determine term relevancy when pruning the
vocabularies. With the reduced vocabularies, documents are classified
using a latent semantic indexing representation and a probabilistic
neural network classifier. Reducing the bag-of-words vocabularies by
90%-98%, we find consistent classification improvement on two mid-size
data sets. We also study the applicability of information gain and
sensitivity maps for automated keyword generation.
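
The information-gain pruning criterion described in the abstract can be illustrated with a small sketch. The Python snippet below is not the authors' implementation; the function names (information_gain, prune_vocabulary) and the keep_fraction parameter are illustrative assumptions. It ranks the terms of a binary bag-of-words matrix by information gain and keeps only the top few percent, mirroring the 90%-98% vocabulary reduction reported in the paper.

```python
# Sketch: information-gain ranking of terms for vocabulary pruning.
# Assumes X is a (documents x terms) array of term counts and y holds
# integer class labels. Not the authors' code; names are illustrative.
import numpy as np

def information_gain(X, y):
    """Information gain of each term's presence/absence about the class."""
    X = (np.asarray(X) > 0).astype(float)   # binarize: term present or not
    y = np.asarray(y)
    classes = np.unique(y)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Class entropy H(C)
    class_priors = np.array([(y == c).mean() for c in classes])
    h_class = entropy(class_priors)

    p_term = X.mean(axis=0)                 # P(term present)
    ig = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        present = X[:, j] > 0
        h_cond = 0.0
        # Conditional entropy H(C | term present/absent)
        for mask, p in ((present, p_term[j]), (~present, 1.0 - p_term[j])):
            if p == 0:
                continue
            cond = np.array([(y[mask] == c).mean() for c in classes])
            h_cond += p * entropy(cond)
        ig[j] = h_class - h_cond            # IG(t) = H(C) - H(C | t)
    return ig

def prune_vocabulary(X, y, keep_fraction=0.05):
    """Indices of the top terms by information gain (e.g. keep 5%, prune 95%)."""
    ig = information_gain(X, y)
    n_keep = max(1, int(round(keep_fraction * X.shape[1])))
    keep = np.argsort(ig)[-n_keep:]
    return np.sort(keep)
```

In the pipeline described in the abstract, the surviving columns of the term-document matrix would then feed a latent semantic indexing projection and the subsequent classifier.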
Keywords: information gain, sensitivity, neural networks, text classification, dimensionality reduction
Type: Conference paper [with referee]
Conference: Proceedings of the International Joint Conference on Neural Networks
Year: 2004    Month: August    Pages: 80-85
Publisher: IEEE Press
Note: Special session on machine learning for text mining
IMM Group(s): Intelligent Signal Processing