
Pruning nearest neighbor cluster trees


Conference Paper


Nearest neighbor ($k$-NN) graphs are widely used in machine learning and data mining applications, and our aim is to better understand what they reveal about the cluster structure of the unknown underlying distribution of points. Moreover, is it possible to identify spurious structures that might arise due to sampling variability? Our first contribution is a statistical analysis that reveals how certain subgraphs of a $k$-NN graph form a consistent estimator of the cluster tree of the underlying distribution of points. Our second and perhaps most important contribution is the following finite sample guarantee. We carefully work out the tradeoff between aggressive and conservative pruning and are able to guarantee the removal of all spurious cluster structures while at the same time guaranteeing the recovery of salient clusters. This is the first such finite sample result in the context of clustering.
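To make the setting concrete, the following is a minimal sketch of a generic $k$-NN-based cluster tree estimate: points whose $k$-NN radius is small are treated as "dense", and the connected components of the subgraph they induce at a given scale $\lambda$ form the clusters at that level. This is an illustration of the general idea only, not the estimator or pruning procedure analyzed in the paper; the function names, the choice of $k$, and the level parameter `lam` are all assumptions made for the example.

```python
# Illustrative sketch of a k-NN-based cluster tree level (NOT the paper's
# exact estimator): keep points with small k-NN radius, connect nearby
# survivors, and read off connected components as clusters.
import math

def knn_radius(points, k):
    """r_k(x): distance from each point to its k-th nearest neighbor."""
    radii = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        radii.append(dists[k - 1])
    return radii

def clusters_at_level(points, radii, lam):
    """Connected components of the subgraph on 'dense' points:
    keep x with r_k(x) <= lam, link pairs within distance lam."""
    active = [i for i, r in enumerate(radii) if r <= lam]
    parent = {i: i for i in active}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for a in active:
        for b in active:
            if a < b and math.dist(points[a], points[b]) <= lam:
                parent[find(a)] = find(b)
    comps = {}
    for i in active:
        comps.setdefault(find(i), []).append(i)
    return sorted(comps.values(), key=min)

# Two well-separated 1-D clumps plus one outlier (index 6).
pts = [(0.0,), (0.1,), (0.2,), (5.0,), (5.1,), (5.2,), (10.0,)]
radii = knn_radius(pts, k=2)
print(clusters_at_level(pts, radii, lam=0.5))  # → [[0, 1, 2], [3, 4, 5]]
```

Sweeping `lam` from small to large traces out a hierarchy of such levels; the outlier, whose $k$-NN radius is large, only joins a cluster at coarse levels. Spurious small components at intermediate levels are exactly the kind of structure the paper's pruning guarantee is about.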

Author(s): Kpotufe, S. and von Luxburg, U.
Pages: 225-232
Year: 2011
Month: July
Editors: Getoor, L. and Scheffer, T.
Publisher: International Machine Learning Society

Department(s): Empirical Inference
Bibtex Type: Conference Paper (inproceedings)

Event Name: 28th International Conference on Machine Learning (ICML 2011)
Event Place: Bellevue, WA, USA

Address: Madison, WI, USA
ISBN: 978-1-450-30619-5



@inproceedings{kpotufe2011pruning,
  title     = {Pruning nearest neighbor cluster trees},
  author    = {Kpotufe, S. and von Luxburg, U.},
  booktitle = {Proceedings of the 28th International Conference on Machine Learning (ICML 2011)},
  pages     = {225--232},
  editor    = {Getoor, L. and Scheffer, T.},
  publisher = {International Machine Learning Society},
  address   = {Madison, WI, USA},
  month     = jul,
  year      = {2011}
}