iml: Interpretable Machine Learning
Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are:
- Permutation feature importance, described by Fisher et al. (2018), doi:10.48550/arxiv.1801.01489
- Accumulated local effects (ALE) plots, described by Apley (2018), doi:10.48550/arxiv.1612.08468
- Partial dependence plots, described by Friedman (2001), www.jstor.org/stable/2699986
- Individual conditional expectation ('ice') plots, described by Goldstein et al. (2013), doi:10.1080/10618600.2014.907095
- Local surrogate models (a variant of 'lime'), described by Ribeiro et al. (2016), doi:10.48550/arXiv.1602.04938
- The Shapley value, described by Strumbelj et al. (2014), doi:10.1007/s10115-013-0679-x
- Feature interactions, described by Friedman et al., doi:10.1214/07-AOAS148
- Tree surrogate models
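The methods above share a common workflow in iml: wrap any fitted model in a `Predictor` object, then pass it to an interpretation method. A minimal sketch, using a random forest on the Boston housing data purely for illustration (any model with a predict method works):

```r
library(iml)
library(randomForest)

data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Predictor decouples the fitted model from the interpretation methods
predictor <- Predictor$new(rf, data = Boston[, -which(names(Boston) == "medv")],
                           y = Boston$medv)

# Permutation feature importance (Fisher et al., 2018)
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Accumulated local effects plot for one feature (Apley, 2018)
ale <- FeatureEffect$new(predictor, feature = "rm", method = "ale")
plot(ale)
```

Other methods from the list follow the same pattern, e.g. `Shapley$new(predictor, x.interest = ...)` or `TreeSurrogate$new(predictor)`.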
- Version: 0.11.3
- R version: unknown
- License: MIT + file LICENSE
- Needs compilation: no
- Last release: 04/27/2024
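The released version listed above is available from CRAN and can be installed in the usual way:

```r
# Install the released version of iml from CRAN
install.packages("iml")
```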
Documentation
Team
- Giuseppe Casalicchio
- Patrick Schratz (maintainer)
- Christoph Molnar (author)
Insights
Last 30 days
This package has been downloaded 6,761 times in the last 30 days; yesterday, it was downloaded 216 times. The following heatmap shows the distribution of downloads per day.
The following line graph shows the downloads per day.
Last 365 days
This package has been downloaded 59,015 times in the last 365 days. The day with the most downloads was Feb 25, 2025, with 441 downloads.
The following line graph shows the downloads per day.
Data provided by CRAN
Binaries
Dependencies
- Imports: 8 packages
- Suggests: 23 packages
- Reverse imports: 3 packages
- Reverse suggests: 6 packages