iml

Interpretable Machine Learning

CRAN Package

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018), accumulated local effects plots described by Apley (2018), partial dependence plots described by Friedman (2001), individual conditional expectation ('ice') plots described by Goldstein et al. (2013), local models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley value described by Strumbelj et al. (2014), feature interactions described by Friedman et al., and tree surrogate models. A minimal usage sketch follows below.
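To make the workflow concrete, here is a minimal sketch of how these methods are typically accessed from R. The random forest model and the Boston housing data (from the MASS package) are only illustrative choices, not requirements of iml; the Predictor, FeatureImp, FeatureEffect and Shapley classes shown follow the package's documented R6 interface.

  library("iml")
  library("randomForest")
  data("Boston", package = "MASS")

  # Fit any supervised model; iml is model-agnostic.
  rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

  # Wrap the model and its data in a Predictor object.
  X <- Boston[, names(Boston) != "medv"]
  predictor <- Predictor$new(rf, data = X, y = Boston$medv)

  # Permutation feature importance (Fisher et al. 2018).
  imp <- FeatureImp$new(predictor, loss = "mae")
  plot(imp)

  # Accumulated local effects plot (Apley 2018) for one feature.
  ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
  plot(ale)

  # Shapley values (Strumbelj et al. 2014) for a single observation.
  shap <- Shapley$new(predictor, x.interest = X[1, ])
  plot(shap)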


Documentation


Team


Insights

Daily download statistics are available for the last 30 days and the last 365 days (data provided by CRAN).


Binaries


Dependencies

  • Imports: 8 packages
  • Suggests: 24 packages
  • Reverse Imports: 3 packages
  • Reverse Suggests: 7 packages