
How to Explain Individual Classification Decisions


After building a classifier with modern machine learning tools, we typically have a black box at hand that is able to predict well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer as to why the model predicted a particular label for a single instance, nor which features were most influential for that particular instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows the decisions of any classification method to be explained.
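
The procedure the paper develops is built on local gradients of the class-probability function: the explanation for a single instance is the gradient of the model's predicted-class probability at that point, so features with large gradient magnitude are the ones that locally sway the decision. The following is a minimal Python sketch of that idea, assuming a scikit-learn-style classifier exposing predict_proba; the finite-difference estimator and the name explanation_vector are illustrative choices, not code from the paper.

import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

def explanation_vector(model, x, eps=1e-4):
    # Finite-difference estimate of the gradient of the predicted
    # class's probability with respect to each input feature.
    c = int(np.argmax(model.predict_proba(x.reshape(1, -1))[0]))
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[i] += eps
        x_lo[i] -= eps
        p_hi = model.predict_proba(x_hi.reshape(1, -1))[0, c]
        p_lo = model.predict_proba(x_lo.reshape(1, -1))[0, c]
        grad[i] = (p_hi - p_lo) / (2 * eps)
    return grad  # large |grad[i]| marks feature i as locally influential

# Usage: works for any classifier that outputs class probabilities.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(probability=True).fit(X, y)
print(explanation_vector(clf, X[0]))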

Author(s): Baehrens, D. and Schroeter, T. and Harmeling, S. and Kawanabe, M. and Hansen, K. and Müller, K-R.
Journal: Journal of Machine Learning Research
Volume: 11
Pages: 1803-1831
Year: 2010
Month: June

Department(s): Empirical Inference
Bibtex Type: Article (article)

Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

Links: PDF

BibTeX

@article{6670,
  title = {How to Explain Individual Classification Decisions},
  author = {Baehrens, D. and Schroeter, T. and Harmeling, S. and Kawanabe, M. and Hansen, K. and M{\"u}ller, K-R.},
  journal = {Journal of Machine Learning Research},
  volume = {11},
  pages = {1803--1831},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = jun,
  year = {2010}
}