Test Functions
Methodology
This project benchmarks optimization algorithms available in the pytorch_optimizer library. The evaluation process consists of the following steps (a minimal sketch of the loop appears after this list):
- Hyperparameter Tuning: Optuna is used to search for optimal hyperparameters (learning rate, momentum, etc.) over a fixed number of trials.
- Execution: The optimizer is run using the best hyperparameters found.
- Visualization: The trajectory is recorded and plotted on the function's contour map.
- Scoring: Optimizers are ranked by the weighted mean of their ranks across all functions (lower is better).
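As a concrete illustration of this loop, here is a minimal sketch using Optuna and a plain PyTorch optimizer. The Rosenbrock function, the SGD stand-in, the search ranges, the start point, and the 50-trial budget are illustrative assumptions, not the benchmark's actual configuration; in the real benchmark, an optimizer from pytorch_optimizer would take SGD's place.

```python
import optuna
import torch


def rosenbrock(xy: torch.Tensor) -> torch.Tensor:
    """2D Rosenbrock function; global minimum of 0 at (1, 1)."""
    x, y = xy[0], xy[1]
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2


def run(lr: float, momentum: float, steps: int = 200):
    """Run one optimizer from a fixed start; return final loss and path."""
    xy = torch.tensor([-2.0, 2.0], requires_grad=True)
    opt = torch.optim.SGD([xy], lr=lr, momentum=momentum)
    path = []  # recorded trajectory, later drawn over the contour map
    for _ in range(steps):
        opt.zero_grad()
        loss = rosenbrock(xy)
        loss.backward()
        opt.step()
        path.append(xy.detach().clone())
    return rosenbrock(xy).item(), path


def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-4, 1.0, log=True)
    momentum = trial.suggest_float("momentum", 0.0, 0.99)
    final_loss, _ = run(lr, momentum)
    return final_loss


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
best_loss, best_path = run(**study.best_params)  # execution with best params
```

The recorded `path` is what the Visualization step draws over the function's contour map (e.g. with matplotlib's `contour` plus a line plot).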
FAQ
What functions are used?
Classical low-dimensional test functions from the optimization literature, as catalogued in the SFU Virtual Library of Simulation Experiments (see References).
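For instance, the Rastrigin function from that library can be written directly in PyTorch. This definition is illustrative and independent of the benchmark's own code:

```python
import math

import torch


def rastrigin(x: torch.Tensor, a: float = 10.0) -> torch.Tensor:
    """Rastrigin function: highly multimodal, global minimum of 0 at the origin."""
    n = x.numel()
    return a * n + torch.sum(x ** 2 - a * torch.cos(2 * math.pi * x))
```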
Does this predict Neural Network performance?
Not directly. These analytic landscapes are useful for visualizing and comparing optimizer behavior, but strong results here do not guarantee strong performance on high-dimensional, stochastic neural-network training.
How are the rankings calculated?
For each test function, optimizers are ranked by their final result; the overall score is the weighted mean of these per-function ranks across all functions, so lower is better. A sketch of the computation follows.
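A minimal sketch of that scoring rule, assuming hypothetical per-function results and weights (the benchmark's actual functions and weights are its own choice):

```python
import numpy as np

# Hypothetical final losses per optimizer (rows) on each test function (columns).
losses = np.array([
    [0.01, 0.30, 1.2],   # optimizer A
    [0.05, 0.10, 0.8],   # optimizer B
    [0.02, 0.50, 2.0],   # optimizer C
])
weights = np.array([1.0, 1.0, 2.0])  # hypothetical per-function weights

# Rank within each function (1 = best), then take the weighted mean across functions.
ranks = losses.argsort(axis=0).argsort(axis=0) + 1
scores = (ranks * weights).sum(axis=1) / weights.sum()
print(scores)  # lower is better
```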
References
- Bingham, D. (curator). Virtual Library of Simulation Experiments: Test Functions and Datasets for Optimization Algorithms. Simon Fraser University. https://www.sfu.ca/~ssurjano/optimization.html
- Kim, H. (2021). pytorch_optimizer: optimizer & lr scheduler & loss function collections in PyTorch (Version 2.12.0) [Computer software]. https://github.com/kozistr/pytorch_optimizer