Model output explanation with SHAP

SHAP (SHapley Additive exPlanations) is a tool for explaining the outputs of machine learning models.

It is based on Shapley values from game theory and their related extensions (see the SHAP documentation for a detailed explanation).
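To make the game-theory connection concrete, the sketch below computes exact Shapley values for a toy two-feature "game" in pure Python, using the classic weighted-marginal-contribution formula. The payoff numbers and feature names are invented for illustration; this is not the SHAP library's implementation, which uses efficient approximations instead of this exponential enumeration.

```python
from itertools import combinations
from math import factorial

# Hypothetical payoffs: the model's output when only a given
# coalition of features is "present" (illustrative values only).
PAYOFFS = {
    frozenset(): 0.0,
    frozenset({"a"}): 10.0,
    frozenset({"b"}): 20.0,
    frozenset({"a", "b"}): 40.0,
}

def value(coalition):
    return PAYOFFS[coalition]

def shapley(player, players):
    """Exact Shapley value: weighted average of the player's
    marginal contribution over all coalitions of the others."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(n):
        for subset in combinations(others, r):
            s = frozenset(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value(s | {player}) - value(s))
    return total

players = ["a", "b"]
print(shapley("a", players))  # 15.0
print(shapley("b", players))  # 25.0
# The values sum to the full coalition's payoff (40.0), as Shapley
# efficiency requires.
```

The efficiency property shown in the last comment is exactly why SHAP values are attractive for model explanation: per-feature attributions add up to the model's output (relative to a baseline).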

Running SHAP

You can install SHAP from PyPI or conda-forge and then use it from Jupyter Notebooks.
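For reference, the standard installation commands (either one works; pick the channel that matches your environment):

```shell
# Install from PyPI
pip install shap

# Or from conda-forge
conda install -c conda-forge shap
```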

Quick start

Our Dogs demo project includes a prepared Jupyter notebook with SHAP.

Just follow the steps described in the project's README to see how SHAP works with image classification models.

Using SHAP with your projects

The official SHAP documentation provides thorough and easy-to-follow guides on how to run SHAP in various environments.

Since the platform is integrated with Jupyter Notebooks, running SHAP on it is just a matter of following the corresponding tutorials.
