PyTorch Optimizer Benchmark

Trajectory visualization and evaluation of optimization algorithms
on standard 2D mathematical test functions.

Tuning Engine: Optuna · Test Suite: 15 Functions · Library: pytorch_optimizer


Leaderboard (loaded dynamically) — columns: Rank · Optimizer · Score · Visuals

Test Functions

Methodology

This project benchmarks the algorithms available in the pytorch_optimizer library. The evaluation process consists of:

  • Hyperparameter Tuning: Optuna searches for optimal hyperparameters (learning rate, momentum, etc.) over a fixed number of trials.
  • Execution: The optimizer is re-run using the best hyperparameters found.
  • Visualization: The trajectory is recorded and plotted on the function's contour map.
  • Scoring: Optimizers are ranked by the weighted mean of their ranks across all functions (lower is better).

FAQ

What functions are used?
The suite uses 15 standard optimization test functions, including multimodal functions (e.g., Ackley) and valley-shaped functions (e.g., Rosenbrock).
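The two named examples have simple closed forms. The definitions below use the standard formulations of these test functions (plain Python, no torch), which illustrate the two landscape categories: a narrow curved valley versus a surface riddled with local minima.

```python
import math

def rosenbrock(x, y):
    # Valley-shaped: a long, narrow, curved valley leads to the minimum at (1, 1).
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def ackley(x, y):
    # Multimodal: many local minima surround the global minimum at (0, 0).
    return (-20 * math.exp(-0.2 * math.sqrt(0.5 * (x ** 2 + y ** 2)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))
            + math.e + 20)
```

Valley-shaped functions test how an optimizer handles ill-conditioned curvature; multimodal ones test whether it escapes local minima.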
Does this predict Neural Network performance?
No. These are low-dimensional (2D) problems, while deep learning involves high-dimensional optimization with different landscape features, such as saddle points. This benchmark strictly evaluates algorithmic behavior in a 2D context.
How are the rankings calculated?
Score is the weighted mean of the optimizer's rank across all test functions; a score of 1.0 means the optimizer ranked #1 on every function.
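The scoring rule described above reduces to a weighted average of per-function ranks. A minimal sketch, using hypothetical ranks and weights (the benchmark's actual weights are not stated here):

```python
def weighted_mean_rank(ranks, weights):
    # ranks: per-function rank of one optimizer (1 = best); lower score is better.
    total_weight = sum(weights[f] for f in ranks)
    return sum(ranks[f] * weights[f] for f in ranks) / total_weight

# Hypothetical example: one optimizer's ranks on three functions.
ranks = {"ackley": 2, "rosenbrock": 1, "rastrigin": 3}
weights = {"ackley": 1.0, "rosenbrock": 2.0, "rastrigin": 1.0}

score = weighted_mean_rank(ranks, weights)                     # lower is better
perfect = weighted_mean_rank({f: 1 for f in ranks}, weights)   # 1.0 when #1 everywhere
```

With these example weights the score is (2·1 + 1·2 + 3·1) / 4 = 1.75, and an optimizer that ranks first on every function scores exactly 1.0.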
