Explainability in practice

by Israel A. Azime and Paloma GarcĂ­a-de-Herreros

In this workshop we will explore SHAP (SHapley Additive exPlanations), a game-theoretic approach to explaining the output of any machine learning model. Using SHAP, we can explain the decisions made by our machine learning models. In this practical session we will work through the following examples and see how to explain the output of each type of task.
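To make the game-theoretic idea concrete before the hands-on tasks: a feature's Shapley value is its average marginal contribution to the prediction across all possible feature coalitions. The sketch below (not from the workshop materials; a brute-force illustration using only the standard library) computes exact Shapley values for a toy additive "model", where each feature's attribution should recover its own weight.

```python
from itertools import combinations
from math import factorial


def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating every feature coalition.

    value_fn maps a set of present features to the model output.
    Exponential in len(features); SHAP uses clever approximations instead.
    """
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                # Weight of a coalition of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = value_fn(set(subset) | {f})
                without_f = value_fn(set(subset))
                phi[f] += weight * (with_f - without_f)
    return phi


# Toy additive model: the prediction is the sum of the weights of the
# features that are "present" in the coalition (weights are made up).
weights = {"x1": 2.0, "x2": -1.0, "x3": 0.5}


def toy_model(coalition):
    return sum(weights[f] for f in coalition)


phi = shapley_values(toy_model, list(weights))
# For a purely additive model, each feature's Shapley value equals its weight.
```

Real SHAP explainers (e.g. for the tree ensembles, image classifiers, and language models below) avoid this exponential enumeration with model-specific approximations, but the attribution they estimate is the same quantity.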

The hands-on materials used during the workshop can be found at the following link.

  • Explainability for tree ensemble models

    • Tabular data practical task
  • Explainability in Computer Vision

    • MNIST image classification practice task
  • Explainability in Language Models

    • Emotion classification explainability