Middle-Level Features for the Explanation of Classification Systems by Sparse Dictionary Methods.

Int J Neural Syst. 2020 Aug;30(8):2050040

Authors: Apicella A, Isgrò F, Prevete R, Tamburrini G

Abstract
Machine learning (ML) systems are affected by a pervasive lack of transparency. The eXplainable Artificial Intelligence (XAI) research area addresses this problem and the related issue of explaining the behavior of ML systems in terms that are understandable to human beings. In many XAI approaches, the outputs of ML systems are explained in terms of low-level features of their inputs. However, these approaches leave a substantial explanatory burden with human users, insofar as the latter are required to map low-level properties onto more salient and readily understandable parts of the input. To alleviate this cognitive burden, an alternative model-agnostic framework is proposed here. This framework is instantiated to address explanation problems in the context of ML image classification systems, without relying on pixel relevance maps or other low-level features of the input. More specifically, sets of perceptually salient middle-level properties of classification inputs are obtained by applying sparse dictionary learning techniques. These middle-level properties are used as building blocks for explanations of image classifications. The achieved explanations are parsimonious, for their reliance on a limited set...
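To make the dictionary-based step described in the abstract more concrete, here is a minimal sketch assuming scikit-learn's MiniBatchDictionaryLearning applied to image patches. The dataset, patch size, number of atoms, and sparsity settings below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (not the paper's exact method): learn a sparse
# dictionary over image patches and express each patch as a sparse
# combination of a few atoms, which can then serve as candidate
# middle-level properties for an explanation.
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Grayscale sample image rescaled to [0, 1]
image = load_sample_image("china.jpg").mean(axis=2) / 255.0

# Extract small patches; the patch size is an assumed hyperparameter
patch_size = (8, 8)
patches = extract_patches_2d(image, patch_size, max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)  # remove per-patch mean intensity

# Learn a dictionary and sparse-code the patches (settings are assumptions)
dico = MiniBatchDictionaryLearning(
    n_components=100,           # number of dictionary atoms
    alpha=1.0,                  # sparsity penalty during learning
    transform_algorithm="omp",  # sparse coding via orthogonal matching pursuit
    transform_n_nonzero_coefs=5,
    random_state=0,
)
codes = dico.fit(X).transform(X)

# Each patch is reconstructed from only a few active atoms; those atoms are
# the middle-level building blocks an explanation could refer to, instead of
# raw pixel relevance values.
active_atoms = np.flatnonzero(codes[0])
print("Atoms used to reconstruct the first patch:", active_atoms)
```

Because each input region is tied to only a handful of non-zero dictionary coefficients, an explanation built on these atoms can remain parsimonious in the sense the abstract describes, in contrast to dense per-pixel relevance maps.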