## Features & Benchmark Functions

- Benchmarks a wide range of optimizers from the `pytorch_optimizer` library.
- Performs automated hyperparameter tuning using Optuna.
- Generates detailed trajectory visualizations for each optimizer and function pair.
- Presents performance rankings.
- Is fully configurable via a `config.toml` file.
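The exact configuration keys are defined by the repository's own schema; purely as a hypothetical sketch, a `config.toml` controlling such a benchmark might look like:

```toml
# Hypothetical sketch only; key names here are illustrative, not the repository's actual schema.
[benchmark]
optimizers = ["Adam", "Lion", "AdaBelief"]  # optimizers to benchmark
steps = 500                                 # optimization steps per run

[tuning]
trials = 50                                 # Optuna trials per optimizer/function pair

[output]
results_dir = "results"                     # where rankings and trajectory plots are written
```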
## Evaluated on Standard 2D Test Functions
The optimizers are benchmarked on a diverse set of mathematical functions to test their performance in various landscapes. Click on a function's name to learn more about it.
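To illustrate what such a landscape looks like, here is a minimal pure-Python sketch (not the repository's code) of the classic Rosenbrock test function, whose narrow curved valley is notoriously slow for optimizers to traverse, together with a plain gradient-descent loop that records its trajectory the way a visualization would need:

```python
# Rosenbrock function: global minimum f(1, 1) = 0, at the bottom of a narrow curved valley.
def rosenbrock(x, y, a=1.0, b=100.0):
    return (a - x) ** 2 + b * (y - x**2) ** 2

def rosenbrock_grad(x, y, a=1.0, b=100.0):
    # Analytic partial derivatives of the function above.
    dx = -2 * (a - x) - 4 * b * x * (y - x**2)
    dy = 2 * b * (y - x**2)
    return dx, dy

def descend(x, y, lr=1e-4, steps=2000):
    """Plain gradient descent, recording every point for later plotting."""
    trajectory = [(x, y)]
    for _ in range(steps):
        dx, dy = rosenbrock_grad(x, y)
        x, y = x - lr * dx, y - lr * dy
        trajectory.append((x, y))
    return trajectory

traj = descend(-1.5, 1.5)
# The loss drops substantially from the starting point, but full convergence
# to (1, 1) is slow: exactly the behavior these benchmarks compare.
```

The small learning rate is deliberate: Rosenbrock's curvature near the start is steep enough that larger steps oscillate or diverge, which is why adaptive optimizers are interesting to compare here.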
## 📊 Optimizer Performance Rankings
Optimizers are ranked by their average performance across all benchmark functions. The interactive tables below allow you to compare results and view detailed trajectory visualizations for each optimization algorithm, providing insight into their convergence properties.
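One simple way to compute such an average rank (a sketch of the general idea, not the repository's implementation) is to rank every optimizer per function by final error, with 1 as best, and then average those ranks across functions. The optimizer names and numbers below are placeholders, not actual benchmark results:

```python
from statistics import mean

def average_ranks(results):
    """results: {function_name: {optimizer_name: final_error}} -> {optimizer_name: average rank}."""
    per_optimizer = {}
    for errors in results.values():
        # Rank optimizers on this function: 1 = lowest final error.
        ordered = sorted(errors, key=errors.get)
        for rank, name in enumerate(ordered, start=1):
            per_optimizer.setdefault(name, []).append(rank)
    return {name: mean(ranks) for name, ranks in per_optimizer.items()}

# Placeholder data purely for illustration:
demo = {
    "rosenbrock": {"OptA": 0.01, "OptB": 0.50},
    "rastrigin":  {"OptA": 2.00, "OptB": 0.10},
}
print(average_ranks(demo))  # each optimizer wins once, so both average rank 1.5
```

Averaging ranks rather than raw errors keeps functions with very different error scales from dominating the comparison.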
Download Latest Benchmark Results 📦
| Rank | Optimizer | Average Rank | Visualization |
|---|---|---|---|

| Rank | Optimizer | Avg Error Rate | Visualization |
|---|---|---|---|
## 🚀 Getting Started

```shell
# Clone the repository
git clone --depth 1 https://github.com/AidinHamedi/Optimizer-Benchmark.git
cd Optimizer-Benchmark

# Install dependencies
uv sync

# Run the benchmark
python runner.py
```
## 🤝 Contributing
Contributions are welcome! Feel free to submit a pull request, or open an issue to discuss your ideas first.
## 📚 References

- Virtual Library of Simulation Experiments: Test Functions and Datasets for Optimization Algorithms. Simon Fraser University, curated by Derek Bingham (dbingham@stat.sfu.ca). https://www.sfu.ca/~ssurjano/optimization.html
- Kim, H. (2021). pytorch_optimizer: optimizer & lr scheduler & loss function collections in PyTorch (Version 2.12.0) [Computer software]. https://github.com/kozistr/pytorch_optimizer