Concept drift refers to the phenomenon that the distribution generating the
observed data changes over time. If drift is present, machine learning models
can become inaccurate and need adjustment. While methods exist to detect
concept drift or to adjust models once drift has been observed, the question
of explaining drift, i.e., describing the potentially complex and
high-dimensional change of distribution in a human-understandable fashion, has
hardly been considered so far. This problem is important because it enables an
inspection of the most prominent characteristics of how and where drift
manifests itself; hence, it fosters human understanding of the change and
increases the acceptance of life-long learning models. In this paper, we
present a novel technique that characterizes concept drift in terms of the
characteristic change of spatial features, based on various explanation
methods. To this end, we propose a methodology that reduces the explanation of
concept drift to the explanation of models trained in a suitable way to
extract the information relevant to the drift. In this way, a large variety of
explanation schemes becomes available, and a suitable method can be selected
for the drift-explanation problem at hand. We outline the potential of this
approach and demonstrate its usefulness on several examples.
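
To make the reduction concrete, the following sketch illustrates one common way to instantiate the idea of explaining drift via an explainable model: train a classifier to discriminate samples drawn before versus after a suspected drift point, and then apply an off-the-shelf model-explanation technique, here permutation feature importance, to that classifier. The synthetic data, the choice of classifier, and the explanation method are illustrative assumptions and not the authors' specific setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative sketch (assumption, not the paper's exact method):
# reduce drift explanation to model explanation by training a classifier
# to distinguish samples from before and after a suspected drift point,
# then explaining that classifier.

rng = np.random.default_rng(0)

# Synthetic data: after the drift, the mean of feature 0 shifts.
X_before = rng.normal(loc=0.0, scale=1.0, size=(500, 5))
X_after = rng.normal(loc=0.0, scale=1.0, size=(500, 5))
X_after[:, 0] += 2.0  # drift affects feature 0 only

X = np.vstack([X_before, X_after])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = before, 1 = after

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Any model that admits an explanation scheme could stand in here.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Test accuracy clearly above chance indicates that drift is present.
print("drift-discrimination accuracy:", clf.score(X_test, y_test))

# Standard model-explanation tools now describe *where* the drift manifests:
# features with high importance are those whose distribution changed.
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: permutation importance = {imp:.3f}")
```

In this sketch the permutation importances single out feature 0 as the locus of the change; any other explanation scheme applicable to the chosen classifier could be substituted, which is the flexibility the abstract alludes to.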