fairness: Algorithmic Fairness Metrics
Offers calculation, visualization, and comparison of algorithmic fairness metrics. Fair machine learning is an emerging topic whose overarching aim is to critically assess whether ML algorithms reinforce existing social biases. Unfair algorithms can propagate such biases and produce predictions with a disparate impact on sensitive groups of individuals (defined by sex, gender, ethnicity, religion, income, socioeconomic status, or physical or mental disabilities). Fair algorithms rest on the principle that these groups should be treated similarly or receive similar prediction outcomes. The fairness R package offers the calculation and comparison of commonly and less commonly used fairness metrics across population subgroups. These methods are described by Calders and Verwer (2010) doi:10.1007/s10618-010-0190-x, Chouldechova (2017) doi:10.1089/big.2016.0047, Feldman et al. (2015) doi:10.1145/2783258.2783311, Friedler et al. (2018) doi:10.1145/3287560.3287589, and Zafar et al. (2017) doi:10.1145/3038912.3052660. The package also offers convenient visualizations to help interpret fairness metrics.
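To make the idea concrete, the sketch below computes one of the simplest fairness metrics, demographic (statistical) parity, by hand in base R. This is a hypothetical illustration of the underlying concept, not the fairness package's API: parity holds when the rate of positive predictions is (approximately) equal across subgroups, and the ratio relative to a chosen base group equals 1.

```r
# Illustrative sketch (base R only, not the fairness package API):
# demographic parity compares positive-prediction rates across groups.

set.seed(42)

# Toy data: binary predictions for two hypothetical groups, "A" and "B"
group <- c(rep("A", 100), rep("B", 100))
pred  <- c(rbinom(100, 1, 0.6), rbinom(100, 1, 0.4))

# Positive prediction rate within each group
rates <- tapply(pred, group, mean)

# Ratio of each group's rate to the base group "A";
# a ratio of 1 indicates parity between the groups
parity_ratio <- rates / rates["A"]

print(rates)
print(parity_ratio)
```

The fairness package wraps this kind of per-group computation for many metrics (and adds plots); the base group against which ratios are taken is a user-specified argument in that package as well.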
- Version: 1.2.2
- R version: unknown
- License: MIT + file LICENSE
- Needs compilation? No
- Language: en-US
- Last release: 2021-04-14
Documentation
Team
Nikita Kozodoi
Tibor V. Varga
Insights
Download statistics: daily downloads over the last 30 and 365 days (line graph; data provided by CRAN).
Binaries
Dependencies
- Imports: 5 packages
- Suggests: 3 packages
- Reverse Imports: 1 package
- Reverse Suggests: 1 package