PyTorch Optimizer Benchmark

This project provides an open-source framework for benchmarking PyTorch optimization algorithms. It evaluates optimizers from the pytorch_optimizer library on a suite of standard 2D mathematical test functions, using Optuna for automated hyperparameter tuning. The results include detailed performance rankings and trajectory visualizations to help analyze and compare the behavior of different algorithms.

Important Limitations: These benchmark results are based on synthetic 2D functions and may not reflect real-world performance when training neural networks. The rankings should be used as a reference, not as definitive guidance for practical applications.

Features & Benchmark Functions

  • Benchmarks a wide range of optimizers from the pytorch_optimizer library.
  • Performs automated hyperparameter tuning using Optuna.
  • Generates detailed trajectory visualizations for each optimizer and function pair.
  • Produces performance rankings aggregated across all benchmark functions.
  • Is fully configurable via a config.toml file.
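For a sense of what such a configuration could control, here is a hypothetical config.toml fragment. The section and key names below are illustrative only and are not taken from the project's actual schema; consult the config.toml in the repository for the real options.

```toml
# Hypothetical example -- see the repository's config.toml for the real schema.
[benchmark]
optimizers = ["Adam", "Lion", "AdaBelief"]  # optimizer names from pytorch_optimizer
functions = ["rosenbrock", "rastrigin"]     # 2D test functions to evaluate on
steps = 500                                 # optimization steps per run

[tuning]
trials = 100                                # Optuna trials per optimizer/function pair
```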

Evaluated on Standard 2D Test Functions

The optimizers are benchmarked on a diverse set of mathematical functions chosen to probe their behavior across varied loss landscapes. Click on a function's name to learn more about it.
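To illustrate the kind of experiment a single benchmark run performs, here is a minimal, self-contained sketch that minimizes the Rosenbrock function (a standard 2D test function with its global minimum at (1, 1)) while recording the trajectory. It uses plain gradient descent with a hand-picked learning rate; the actual project runs pytorch_optimizer algorithms with Optuna-tuned hyperparameters instead.

```python
# Sketch of one benchmark run: plain gradient descent on the Rosenbrock
# function. The real project uses pytorch_optimizer algorithms with
# Optuna-tuned hyperparameters; the learning rate here is hand-picked.

def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

def rosenbrock_grad(x, y):
    dx = -2 * (1 - x) - 400 * x * (y - x * x)
    dy = 200 * (y - x * x)
    return dx, dy

x, y, lr = -1.5, 1.5, 1e-4
trajectory = [(x, y)]  # recorded so the path can be visualized afterwards
for _ in range(20000):
    dx, dy = rosenbrock_grad(x, y)
    x, y = x - lr * dx, y - lr * dy
    trajectory.append((x, y))

print(f"start loss: {rosenbrock(*trajectory[0]):.4f}")
print(f"final loss: {rosenbrock(x, y):.4f}")
```

The recorded trajectory is what the project's visualizations plot on top of the function's contour map, making convergence behavior (oscillation, valley-following, stalling) directly visible.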

📊 Optimizer Performance Rankings

Optimizers are ranked by their average performance across all benchmark functions. The interactive tables below allow you to compare results and view detailed trajectory visualizations for each optimization algorithm, providing insight into their convergence properties.

Table 1 (ranked by average rank): Rank | Optimizer | Average Rank | Visualization
Table 2 (ranked by average error rate): Rank | Optimizer | Avg Error Rate | Visualization
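One plausible way to aggregate per-function results into an overall ranking is to rank the optimizers on each function by final error and then average those ranks. The sketch below shows this scheme with made-up numbers; the project's actual aggregation may differ in its details.

```python
# Hypothetical sketch of aggregating per-function results into an overall
# ranking; the project's actual aggregation may differ.

def average_ranks(results):
    """results: {optimizer: {function: final_error}} -> {optimizer: average rank}."""
    functions = {fn for errors in results.values() for fn in errors}
    rank_sums = {opt: 0.0 for opt in results}
    for fn in functions:
        # Rank optimizers on this function by final error (lower is better).
        ordered = sorted(results, key=lambda opt: results[opt][fn])
        for rank, opt in enumerate(ordered, start=1):
            rank_sums[opt] += rank
    return {opt: rank_sums[opt] / len(functions) for opt in results}

results = {  # made-up final errors for illustration
    "Adam":  {"rosenbrock": 0.02, "rastrigin": 0.1},
    "SGD":   {"rosenbrock": 1.30, "rastrigin": 0.5},
    "AdamW": {"rosenbrock": 0.01, "rastrigin": 0.9},
}
print(average_ranks(results))
```

Averaging ranks rather than raw errors keeps functions with very different error scales from dominating the overall score.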

🚀 Getting Started

# Clone repository
git clone --depth 1 https://github.com/AidinHamedi/Optimizer-Benchmark.git
cd Optimizer-Benchmark

# Install dependencies
uv sync

# Run the benchmark
python runner.py

🤝 Contributing

Contributions are welcome! Submit a pull request, or open an issue to discuss your ideas first.

📚 References