Add L-BFGS optimizer from pyensmallen #566

Open
wants to merge 19 commits into main
10 changes: 9 additions & 1 deletion .github/workflows/main.yml
@@ -34,10 +34,18 @@ jobs:
cache-environment: true
create-args: |
python=${{ matrix.python-version }}
- name: run pytest
- name: run pytest (for python 3.13)
shell: bash -l {0}
if: runner.os == 'Linux' && matrix.python-version == '3.13'
run: |
micromamba activate optimagic
pytest --cov-report=xml --cov=./
- name: run pytest (for python < 3.13 with pip install pyensmallen_experimental)
shell: bash -l {0}
if: runner.os == 'Linux' && matrix.python-version < '3.13'
run: |
micromamba activate optimagic
pip install pyensmallen_experimental
pytest --cov-report=xml --cov=./
- name: Upload coverage report.
if: runner.os == 'Linux' && matrix.python-version == '3.10'
36 changes: 35 additions & 1 deletion docs/source/algorithms.md
@@ -3936,6 +3936,39 @@ addition to optimagic when using an NLOPT algorithm. To install nlopt run
10 * (number of parameters + 1).
```

## Optimizers from the Ensmallen C++ library

optimagic supports some optimizers from the Ensmallen C++ library. Optimizers from this
library are made available in Python through the pyensmallen wrapper. To use them, you
need to have [pyensmallen](https://pypi.org/project/pyensmallen-experimental/)
installed (`pip install pyensmallen_experimental`).

````{eval-rst}
.. dropdown:: ensmallen_lbfgs

.. code-block::

"ensmallen_lbfgs"

Minimize a scalar function using the L-BFGS algorithm.

L-BFGS is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm using a limited amount of computer memory.

A detailed description of the algorithm is given in :cite:`Matthies1979`.

- **limited_memory_storage_length** (int): Maximum number of saved gradients used to approximate the Hessian matrix.
- **stopping.maxiter** (int): Maximum number of iterations for the optimization (0 means no limit, so the optimizer may run indefinitely).
- **armijo_constant** (float): Controls the accuracy of the line search routine for determining the Armijo condition. Default is 1e-4.
- **wolfe_condition** (float): Parameter for detecting the Wolfe condition. Default is 0.9.
- **convergence.gtol_abs** (float): Stop when the absolute gradient norm is smaller than this.
- **convergence.ftol_rel** (float): Stop when the relative improvement between two iterations falls below this.
- **max_line_search_trials** (int): Maximum number of trials for the line search before giving up. Default is 50.
- **min_step_for_line_search** (float): Minimum step size for the line search. Default is 1e-20.
- **max_step_for_line_search** (float): Maximum step size for the line search. Default is 1e20.



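A minimal usage sketch, mirroring the PR's test file (the quadratic objective is purely illustrative; `fun_and_jac` supplies the value and gradient in one call):

```python
import numpy as np
import optimagic as om

# Configure the algorithm; options that are not specified keep the
# defaults listed above.
algo = om.algos.ensmallen_lbfgs(stopping_maxiter=200)

res = om.minimize(
    fun=lambda x: x @ x,                   # scalar objective
    fun_and_jac=lambda x: (x @ x, 2 * x),  # value and gradient together
    params=np.arange(5, dtype=float),
    algorithm=algo,
)
print(res.x)  # approximately the zero vector
```
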
## Optimizers from iminuit

optimagic supports the [IMINUIT MIGRAD Optimizer](https://iminuit.readthedocs.io/). To
@@ -3982,7 +4015,7 @@ iminuit).

- A value of 1 (the default) indicates that the optimizer will only run once, disabling the restart feature.
- Values greater than 1 specify the maximum number of restart attempts.
```
````

(nevergrad-algorithms)=

@@ -4041,6 +4074,7 @@ these optimizers, you need to have
for speed. Default is False.
- **special_speed_quasi_opp_init** (bool): Whether to use special quasi-opposition
initialization for speed. Default is False.

```

## References
12 changes: 12 additions & 0 deletions docs/source/refs.bib
@@ -893,6 +893,18 @@ @book{Conn2009
URL = {https://epubs.siam.org/doi/abs/10.1137/1.9780898718768},
}


@article{Matthies1979,
author = {H. Matthies and G. Strang},
title = {The Solution of Nonlinear Finite Element Equations},
journal = {International Journal for Numerical Methods in Engineering},
volume = {14},
number = {11},
pages = {1613-1626},
year = {1979},
doi = {10.1002/nme.1620141104}
}

@article{JAMES1975343,
title = {Minuit - a system for function minimization and analysis of the parameter errors and correlations},
journal = {Computer Physics Communications},
1 change: 1 addition & 0 deletions pyproject.toml
@@ -335,6 +335,7 @@ ignore_errors = true

[[tool.mypy.overrides]]
module = [
"pyensmallen_experimental",
"pybaum",
"scipy",
"scipy.linalg",
9 changes: 9 additions & 0 deletions src/optimagic/algorithms.py
@@ -38,6 +38,7 @@
NloptVAR,
)
from optimagic.optimizers.pounders import Pounders
from optimagic.optimizers.pyensmallen_optimizers import EnsmallenLBFGS
from optimagic.optimizers.pygmo_optimizers import (
PygmoBeeColony,
PygmoCmaes,
@@ -907,6 +908,7 @@ class GradientBasedLocalScalarAlgorithms(AlgoSelection):
nlopt_slsqp: Type[NloptSLSQP] = NloptSLSQP
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
scipy_bfgs: Type[ScipyBFGS] = ScipyBFGS
scipy_conjugate_gradient: Type[ScipyConjugateGradient] = ScipyConjugateGradient
scipy_lbfgsb: Type[ScipyLBFGSB] = ScipyLBFGSB
@@ -1974,6 +1976,7 @@ class GradientBasedLocalAlgorithms(AlgoSelection):
nlopt_slsqp: Type[NloptSLSQP] = NloptSLSQP
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
scipy_bfgs: Type[ScipyBFGS] = ScipyBFGS
scipy_conjugate_gradient: Type[ScipyConjugateGradient] = ScipyConjugateGradient
scipy_lbfgsb: Type[ScipyLBFGSB] = ScipyLBFGSB
@@ -2087,6 +2090,7 @@ class GradientBasedScalarAlgorithms(AlgoSelection):
nlopt_slsqp: Type[NloptSLSQP] = NloptSLSQP
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
scipy_bfgs: Type[ScipyBFGS] = ScipyBFGS
scipy_basinhopping: Type[ScipyBasinhopping] = ScipyBasinhopping
scipy_conjugate_gradient: Type[ScipyConjugateGradient] = ScipyConjugateGradient
@@ -2709,6 +2713,7 @@ class LocalScalarAlgorithms(AlgoSelection):
nlopt_sbplx: Type[NloptSbplx] = NloptSbplx
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
scipy_bfgs: Type[ScipyBFGS] = ScipyBFGS
scipy_cobyla: Type[ScipyCOBYLA] = ScipyCOBYLA
scipy_conjugate_gradient: Type[ScipyConjugateGradient] = ScipyConjugateGradient
@@ -3110,6 +3115,7 @@ class GradientBasedAlgorithms(AlgoSelection):
nlopt_slsqp: Type[NloptSLSQP] = NloptSLSQP
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
scipy_bfgs: Type[ScipyBFGS] = ScipyBFGS
scipy_basinhopping: Type[ScipyBasinhopping] = ScipyBasinhopping
scipy_conjugate_gradient: Type[ScipyConjugateGradient] = ScipyConjugateGradient
@@ -3306,6 +3312,7 @@ class LocalAlgorithms(AlgoSelection):
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
pounders: Type[Pounders] = Pounders
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
scipy_bfgs: Type[ScipyBFGS] = ScipyBFGS
scipy_cobyla: Type[ScipyCOBYLA] = ScipyCOBYLA
scipy_conjugate_gradient: Type[ScipyConjugateGradient] = ScipyConjugateGradient
@@ -3517,6 +3524,7 @@ class ScalarAlgorithms(AlgoSelection):
nlopt_sbplx: Type[NloptSbplx] = NloptSbplx
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
pygmo_bee_colony: Type[PygmoBeeColony] = PygmoBeeColony
pygmo_cmaes: Type[PygmoCmaes] = PygmoCmaes
pygmo_compass_search: Type[PygmoCompassSearch] = PygmoCompassSearch
@@ -3696,6 +3704,7 @@ class Algorithms(AlgoSelection):
nlopt_tnewton: Type[NloptTNewton] = NloptTNewton
nlopt_var: Type[NloptVAR] = NloptVAR
pounders: Type[Pounders] = Pounders
ensmallen_lbfgs: Type[EnsmallenLBFGS] = EnsmallenLBFGS
pygmo_bee_colony: Type[PygmoBeeColony] = PygmoBeeColony
pygmo_cmaes: Type[PygmoCmaes] = PygmoCmaes
pygmo_compass_search: Type[PygmoCompassSearch] = PygmoCompassSearch
6 changes: 6 additions & 0 deletions src/optimagic/config.py
@@ -91,6 +91,12 @@
else:
IS_NUMBA_INSTALLED = True

try:
import pyensmallen_experimental # noqa: F401
except ImportError:
IS_PYENSMALLEN_INSTALLED = False
else:
IS_PYENSMALLEN_INSTALLED = True

try:
import iminuit # noqa: F401
108 changes: 108 additions & 0 deletions src/optimagic/optimizers/pyensmallen_optimizers.py
@@ -0,0 +1,108 @@
"""Implement ensmallen optimizers."""

from dataclasses import dataclass
from typing import Any

import numpy as np
from numpy.typing import NDArray

from optimagic import mark
from optimagic.config import IS_PYENSMALLEN_INSTALLED
from optimagic.optimization.algo_options import (
CONVERGENCE_FTOL_REL,
CONVERGENCE_GTOL_ABS,
LIMITED_MEMORY_STORAGE_LENGTH,
MAX_LINE_SEARCH_STEPS,
STOPPING_MAXITER,
)
from optimagic.optimization.algorithm import Algorithm, InternalOptimizeResult
from optimagic.optimization.internal_optimization_problem import (
InternalOptimizationProblem,
)
from optimagic.typing import AggregationLevel, NonNegativeFloat, PositiveInt

# use pyensmallen_experimental for testing purposes
if IS_PYENSMALLEN_INSTALLED:
import pyensmallen_experimental as pye

MIN_LINE_SEARCH_STEPS = 1e-20
"""The minimum step of the line search."""
MAX_LINE_SEARCH_TRIALS = 50
"""The maximum number of trials for the line search (before giving up)."""
ARMIJO_CONSTANT = 1e-4
"""Controls the accuracy of the line search routine for determining the Armijo
condition."""
WOLFE_CONDITION = 0.9
"""Parameter for detecting the Wolfe condition."""

STEP_SIZE = 0.001
"""Step size for each iteration."""
BATCH_SIZE = 32
"""Batch size for each iteration."""
EXP_DECAY_RATE_FOR_FIRST_MOMENT = 0.9
"""Exponential decay rate for the first moment estimates."""
EXP_DECAY_RATE_FOR_WEIGHTED_INF_NORM = 0.999
"""Exponential decay rate for the exponentially weighted infinity norm estimates."""


@mark.minimizer(
name="ensmallen_lbfgs",
solver_type=AggregationLevel.SCALAR,
is_available=IS_PYENSMALLEN_INSTALLED,
is_global=False,
needs_jac=True,
needs_hess=False,
supports_parallelism=False,
supports_bounds=False,
supports_linear_constraints=False,
supports_nonlinear_constraints=False,
disable_history=False,
)
@dataclass(frozen=True)
class EnsmallenLBFGS(Algorithm):
limited_memory_storage_length: PositiveInt = LIMITED_MEMORY_STORAGE_LENGTH
stopping_maxiter: PositiveInt = STOPPING_MAXITER
armijo_constant: NonNegativeFloat = ARMIJO_CONSTANT # needs review
wolfe_condition: NonNegativeFloat = WOLFE_CONDITION # needs review
convergence_gtol_abs: NonNegativeFloat = CONVERGENCE_GTOL_ABS
convergence_ftol_rel: NonNegativeFloat = CONVERGENCE_FTOL_REL
max_line_search_trials: PositiveInt = MAX_LINE_SEARCH_TRIALS
min_step_for_line_search: NonNegativeFloat = MIN_LINE_SEARCH_STEPS
max_step_for_line_search: NonNegativeFloat = MAX_LINE_SEARCH_STEPS

def _solve_internal_problem(
self, problem: InternalOptimizationProblem, x0: NDArray[np.float64]
) -> InternalOptimizeResult:
optimizer = pye.L_BFGS(
numBasis=self.limited_memory_storage_length,
maxIterations=self.stopping_maxiter,
armijoConstant=self.armijo_constant,
wolfe=self.wolfe_condition,
minGradientNorm=self.convergence_gtol_abs,
factr=self.convergence_ftol_rel,
maxLineSearchTrials=self.max_line_search_trials,
minStep=self.min_step_for_line_search,
maxStep=self.max_step_for_line_search,
)

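# pyensmallen calls the objective with the current parameters and a
# preallocated gradient array; the callback must fill ``grad`` in place
# and return the objective value.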
def objective_function(
x: NDArray[np.float64], grad: NDArray[np.float64]
) -> np.float64:
fun, jac = problem.fun_and_jac(x)
grad[:] = jac
return np.float64(fun)

# Passing a Report class to the optimizer allows us to retrieve additional info
ens_res: dict[str, Any] = dict()
report = pye.Report(resultIn=ens_res, disableOutput=True)
best_x = optimizer.optimize(objective_function, x0, report)

res = InternalOptimizeResult(
x=best_x,
fun=ens_res["objective_value"],
n_iterations=ens_res["iterations"],
n_fun_evals=ens_res["evaluate_calls"],
n_jac_evals=ens_res["gradient_calls"],
)

return res

22 changes: 22 additions & 0 deletions tests/optimagic/optimizers/test_pyensmallen_optimizers.py
@@ -0,0 +1,22 @@
"""Tests for pyensmallen optimizers."""

import numpy as np
import pytest

import optimagic as om
from optimagic.config import IS_PYENSMALLEN_INSTALLED
from optimagic.optimization.optimize import minimize


@pytest.mark.skipif(not IS_PYENSMALLEN_INSTALLED, reason="pyensmallen not installed.")
def test_stop_after_one_iteration():
algo = om.algos.ensmallen_lbfgs(stopping_maxiter=1)
expected = np.array([0, 0.81742581, 1.63485163, 2.45227744, 3.26970326])
res = minimize(
fun=lambda x: x @ x,
fun_and_jac=lambda x: (x @ x, 2 * x),
params=np.arange(5),
algorithm=algo,
)

assert np.allclose(res.x, expected)