lime
Local Interpretable Model-Agnostic Explanations
When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used to explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016).
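As a rough illustration of the typical workflow, the sketch below trains a simple classifier and then explains individual predictions with the package's `lime()` and `explain()` functions; the use of caret, the random-forest model, and the iris data split are assumptions for demonstration only.

```r
library(caret)  # assumed here only to supply a supported model type
library(lime)

# Hold out a few rows of iris to explain; train on the rest (illustrative split)
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]
model <- train(iris_train, iris_lab, method = "rf")

# Build an explainer from the training data and the fitted model
explainer <- lime(iris_train, model)

# For each new observation, perturb the point and fit a local model around it
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)
head(explanation)
```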
- Version: 0.5.3
- R version: unknown
- License: MIT
- License file: LICENSE
- Needs compilation: Yes
- Last release: 08/19/2022
Documentation
Team
Emil Hvitfeldt
Thomas Lin Pedersen
Michaël Benesty
Insights
[Line graph: package downloads per day over the last 30 and 365 days. Data provided by CRAN.]
Binaries
Dependencies
- Imports: 11 packages
- Suggests: 16 packages
- Linking To: 2 packages
- Reverse Imports: 1 package
- Reverse Suggests: 2 packages