lime
Local Interpretable Model-Agnostic Explanations
When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) doi:10.48550/arXiv.1602.04938.
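A minimal usage sketch of that workflow, adapted from the package's documented examples (the random forest fitted with 'caret' on the built-in iris data is illustrative; any model type supported by 'lime' would work):

```r
library(caret)  # illustrative choice of modelling framework
library(lime)

# Hold out five observations to explain; train on the rest
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Fit a black box model (here: a random forest via caret)
model <- train(iris_train, iris_lab, method = "rf")

# Build an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Explain the held-out points: a local model is fitted around
# each one using perturbations, keeping the top label and the
# two most influential features
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)
```

The resulting explanation can then be visualised, for example with plot_features(explanation).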
- Version: 0.5.3
- R version: unknown
- License: MIT + file LICENSE
- Needs compilation: yes
- Last release: 08/19/2022
Team
- Emil Hvitfeldt (maintainer)
- Thomas Lin Pedersen (author)
- Michaël Benesty (author)
Insights
Line graphs of downloads per day over the last 30 and 365 days are available on the package page (data provided by CRAN).
Dependencies
- Imports: 7 packages
- Suggests: 16 packages
- LinkingTo: 2 packages
- Reverse imports: 1 package
- Reverse suggests: 1 package