Gal A. Kaminka: Publications


Removing biases in unsupervised learning of sequential patterns

Yoav Horman and Gal A. Kaminka. Removing biases in unsupervised learning of sequential patterns. Intelligent Data Analysis, 11(5):457–480, 2007.

Download

[PDF] 243.0kB

Abstract

Unsupervised sequence learning is important to many applications. A learner is presented with unlabeled sequential data, and must discover sequential patterns that characterize the data. Popular approaches to such learning include (and often combine) frequency-based approaches and statistical analysis. However, the quality of results is often far from satisfactory. Though most previous investigations seek to address method-specific limitations, we instead focus on general (method-neutral) limitations in current approaches. This paper takes two key steps towards addressing such general quality-reducing flaws. First, we carry out an in-depth empirical comparison and analysis of popular sequence learning methods in terms of the quality of information produced, for several synthetic and real-world datasets, under controlled settings of noise. We find that both frequency-based and statistics-based approaches (i) suffer from common statistical biases based on the length of the sequences considered; (ii) are unable to correctly generalize the patterns discovered, thus flooding the results with multiple instances (with slight variations) of the same pattern. We additionally show empirically that the relative quality of different approaches changes based on the noise present in the data: Statistical approaches do better at high levels of noise, while frequency-based approaches do better at low levels of noise. As our second contribution, we develop methods for countering these common deficiencies. We show how to normalize rankings of candidate patterns such that the relative ranking of different-length patterns can be compared. We additionally show the use of clustering, based on sequence similarity, to group together instances of the same general pattern, and choose the most general pattern that covers all of these. The results show significant improvements in the quality of results in all methods, and across all noise settings.
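The sketch below is a minimal illustration of the two remedies described in the abstract: scoring candidate patterns relative to other patterns of the same length (so short and long patterns can be ranked against each other), and clustering near-duplicate patterns to keep one general representative. It is not the paper's actual algorithm; the function names, the z-score normalization, and the edit-distance threshold are assumptions chosen only to make the idea concrete.

# Illustrative sketch only -- not the method of Horman and Kaminka (2007).
# (1) Normalize pattern scores within each length class so different-length
#     patterns become comparable (addresses the length bias).
# (2) Group near-duplicate patterns and keep the most general representative
#     (addresses the flood of slight variations of the same pattern).

from collections import Counter
from statistics import mean, stdev


def count_patterns(sequences, max_len=4):
    """Count contiguous subsequences (candidate patterns) up to max_len."""
    counts = Counter()
    for seq in sequences:
        for n in range(1, max_len + 1):
            for i in range(len(seq) - n + 1):
                counts[tuple(seq[i:i + n])] += 1
    return counts


def length_normalized_scores(counts):
    """Z-score each pattern's frequency against patterns of the same length."""
    by_len = {}
    for pat, c in counts.items():
        by_len.setdefault(len(pat), []).append(c)
    scores = {}
    for pat, c in counts.items():
        peers = by_len[len(pat)]
        mu = mean(peers)
        sigma = stdev(peers) if len(peers) > 1 else 1.0
        scores[pat] = (c - mu) / (sigma or 1.0)
    return scores


def edit_distance(a, b):
    """Levenshtein distance between two patterns (tuples of symbols)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]


def merge_similar(patterns, max_dist=1):
    """Greedily keep the shortest (most general) member of each group of
    patterns that lie within max_dist edits of one another."""
    reps = []
    for pat in sorted(patterns, key=len):
        if all(edit_distance(pat, r) > max_dist for r in reps):
            reps.append(pat)
    return reps

A typical use of this sketch would be to score the output of count_patterns with length_normalized_scores, rank by the normalized score, and then pass the top candidates through merge_similar before reporting them.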

Additional Information

BibTeX

@Article{jida07,
  author  = {Yoav Horman and Gal A. Kaminka},
  title   = {Removing biases in unsupervised learning of sequential patterns},
  journal = {Intelligent Data Analysis},
  year    = {2007},
  volume  = {11},
  number  = {5},
  pages   = {457--480},
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Aug 30, 2024 17:29:51