madgrad
'MADGRAD' Method for Stochastic Optimization
MADGRAD is a Momentumized, Adaptive, Dual Averaged Gradient method for stochastic optimization: a 'best-of-both-worlds' optimizer with the generalization performance of stochastic gradient descent and convergence at least as fast as Adam's, often faster. A drop-in optim_madgrad() implementation is provided, based on Defazio et al. (2021) doi:10.48550/arXiv.2101.11075.
- Version: 0.1.0
- R version: unknown
- License: MIT + file LICENSE
- Needs compilation: No
- Last release: 05/10/2021
Documentation
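For orientation, here is a minimal usage sketch. It assumes the torch package's standard optimizer loop; the toy data, model, and learning rate are illustrative assumptions, not taken from the package documentation.

```r
# Minimal sketch: fitting a linear model with optim_madgrad().
# The data, model, and hyperparameters here are illustrative only.
library(torch)
library(madgrad)

torch_manual_seed(1)

# Toy regression task: y is the row sum of x.
x <- torch_randn(100, 10)
y <- torch_sum(x, dim = 2, keepdim = TRUE)

model <- nn_linear(10, 1)
opt   <- optim_madgrad(model$parameters, lr = 0.01)

for (step in 1:100) {
  opt$zero_grad()                     # clear accumulated gradients
  loss <- nnf_mse_loss(model(x), y)   # mean squared error
  loss$backward()                     # backpropagate
  opt$step()                          # MADGRAD parameter update
}
```

Because optim_madgrad() follows the torch optimizer interface, it can stand in for optim_adam() or optim_sgd() in an existing training loop without other changes, which is what "drop-in" means above.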
Team
- Daniel Falbel (maintainer)
- RStudio (copyright holder)
- MADGRAD original implementation authors (copyright holder)
Insights
Last 30 days
This package has been downloaded 118 times in the last 30 days; yesterday, it was downloaded 3 times. More than a random curiosity, but not quite a blockbuster: it's gaining traction.
[Heatmap: distribution of downloads per day. Line graph: downloads per day.]
Last 365 days
This package has been downloaded 1,626 times in the last 365 days, enough to count as 'heard of in academic circles'. The day with the most downloads was Jan 22, 2025, with 29 downloads.
[Line graph: downloads per day.]
Data provided by CRAN
Binaries
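As with any CRAN package, the standard install fetches a prebuilt binary where one is available for your platform:

```r
# Install the released version from CRAN
install.packages("madgrad")
```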
Dependencies
- Imports: 2 packages
- Suggests: 1 package