Automated Data-Agnostic Global Explanations of Machine Learning Models by Means of AutoAnchors

Machine learning has produced numerous success stories over the years. For instance, the voice assistants Siri and Alexa process and respond to spoken user requests, and Netflix generates automated, personalized movie recommendations. Yet research has also revealed crucial weaknesses of machine learning algorithms and thereby exposed their imperfections and vulnerabilities. Human biases and bad habits encoded in historical data easily propagate into the algorithms trained on them and can lead to unfair decision making (Guidotti et al., 2018).

Due to the complexity of some machine learning methods, the reasoning behind individual decisions usually remains unclear, so a model's decisions cannot be understood by humans without further explanation. Especially for decisions based on highly sensitive data, this black-box behaviour of machine learning models is problematic for enterprises. Recent regulatory developments led to the introduction of the General Data Protection Regulation (GDPR), which contains a right to explanation; this directly affects automated decision making and underlines the essential need for interpretable models (Goodman and Flaxman, 2016).

A lack of trust in and acceptance of machine learning methods has led to a new research field known as explainable artificial intelligence (XAI), which is devoted to (a) giving explanations for individual decisions of black-box decision systems (local explanation) and (b) explaining the behaviour of the whole black box (global explanation). A promising, model-agnostic method for explaining local decisions of individual instances is Anchors, introduced by Ribeiro et al. (2018). An explanation rule, the so-called anchor, is computed by perturbing features of the instance and then checking the outcome of the black box; a rule that surpasses a certain precision threshold while retaining sufficient coverage is considered to anchor the decision, as sketched below. However, a major limitation of Anchors is its data dependency: Ribeiro et al. (2018) provide a reasonable parametrization of their method, but an actual sensitivity analysis of the chosen parameters for different use cases is left for future research.
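
The perturb-and-check idea can be illustrated with a minimal sketch. The snippet below is not Ribeiro et al.'s exact algorithm (which searches over candidate rules with a multi-armed-bandit strategy); it only shows how the precision of one candidate anchor could be estimated by holding the anchored features fixed and resampling the remaining ones from background data. The function name `estimate_precision` and the assumption that `black_box` is a batch prediction function are illustrative choices, not part of the original method.

```python
import numpy as np

def estimate_precision(black_box, instance, anchored_idx, X_background,
                       n_samples=1000, seed=0):
    """Fraction of perturbed samples on which the black box keeps the
    prediction it made for `instance`, with the anchored features held fixed."""
    rng = np.random.default_rng(seed)
    target = black_box(instance.reshape(1, -1))[0]

    # Draw perturbations from the background data, then pin the anchored
    # features to the values they take in the instance being explained.
    rows = X_background[rng.integers(0, len(X_background), size=n_samples)].copy()
    rows[:, anchored_idx] = instance[anchored_idx]

    # Precision: share of perturbations on which the prediction is unchanged.
    return float(np.mean(black_box(rows) == target))
```

A candidate rule would then count as an anchor once this estimate exceeds the chosen precision threshold, for instance the value of 0.95 used by Ribeiro et al. (2018).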

The goal of this thesis is to discuss and evaluate possibilities for making Anchors data-agnostic (at least to a certain degree) by means of techniques from automated machine learning (AutoML; see, e.g., Hutter et al., 2019). More precisely, one result of this thesis should be a self-adaptive XAI method, called AutoAnchors, that is capable of automatically configuring its hyperparameters to the data at hand; one conceivable realization is sketched below.
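
To make the intended direction concrete, the following sketch shows one deliberately simple realization of the AutoAnchors idea: a random search over Anchors hyperparameters that keeps the configuration yielding the highest average coverage on a set of validation instances. The helper `fit_anchor`, its signature, and the listed hyperparameter ranges are hypothetical placeholders, not an existing API; in the thesis, more sophisticated AutoML techniques such as Bayesian optimization (Hutter et al., 2019) could take the place of random search.

```python
import numpy as np

# Hypothetical search space over typical Anchors hyperparameters; the exact
# names and ranges are assumptions for illustration only.
SEARCH_SPACE = {
    "precision_threshold": [0.90, 0.95, 0.99],
    "n_perturbations": [500, 1000, 2000],
    "n_discretization_bins": [4, 8, 16],
}

def auto_configure(fit_anchor, black_box, X_val, n_trials=20, seed=0):
    """Random search: return the hyperparameter configuration whose anchors
    achieve the highest mean coverage on the validation instances.
    `fit_anchor(black_box, x, **config)` is assumed to return an object with
    a `coverage` attribute; it stands in for any Anchors implementation."""
    rng = np.random.default_rng(seed)
    best_cfg, best_score = None, -np.inf
    for _ in range(n_trials):
        # Sample one configuration uniformly at random from the search space.
        cfg = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        coverages = [fit_anchor(black_box, x, **cfg).coverage for x in X_val]
        score = float(np.mean(coverages))
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg
```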

 

References

  • Bryce Goodman & Seth Flaxman (2016). EU regulations on algorithmic decision-making and a "right to explanation". In ICML Workshop on Human Interpretability in Machine Learning. [Link]
  • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini & Fosca Giannotti (2018). Local rule-based explanations of black box decision systems. CoRR abs/1805.10820. [Link]
  • Frank Hutter, Lars Kotthoff & Joaquin Vanschoren (2019). Automated Machine Learning: Methods, Systems, Challenges. Springer. [Link]
  • Marco Tulio Ribeiro, Sameer Singh & Carlos Guestrin (2018). Anchors: High-Precision Model-Agnostic Explanations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI). [Link]