
OptimizationODE package for ODE solvers #916


Merged: 36 commits into SciML:master, May 31, 2025

Conversation

ParasPuneetSingh (Collaborator)

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes were updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API

Additional context

This is a package that implements ODE-based solvers for Optimization, with methods such as gradient descent, Chebyshev stabilization, and Runge-Kutta acceleration.
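The underlying idea is that gradient descent is explicit Euler applied to the gradient flow du/dt = -∇f(u), so better ODE integrators can yield better optimizers. A minimal self-contained sketch of that correspondence (illustrative only; the objective, gradient, and function names here are not from the package):

```julia
# Illustrative sketch: gradient descent viewed as an ODE solve.
# Integrating the gradient flow  du/dt = -∇f(u)  with explicit Euler
# and step size dt recovers plain gradient descent with learning rate dt.

# Hypothetical quadratic objective and its analytic gradient.
f(u) = (u[1] - 1)^2 + 2 * (u[2] + 3)^2
grad_f(u) = [2 * (u[1] - 1), 4 * (u[2] + 3)]

function euler_gradient_flow(grad, u0; dt = 0.1, tmax = 100.0)
    u = copy(u0)
    for _ in 1:round(Int, tmax / dt)
        u .-= dt .* grad(u)   # one explicit-Euler step of du/dt = -∇f(u)
    end
    return u
end

u_min = euler_gradient_flow(grad_f, [0.0, 0.0])   # converges toward [1.0, -3.0]
```

Swapping explicit Euler for a stabilized or higher-order integrator (the Chebyshev and Runge-Kutta variants above) is what allows larger effective step sizes on stiff objectives.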

OptimizationODE.jl added and package created for the same.
Added basic tests for the gradient descent code in OptimizationODE.jl
Project.toml for the initial OptimizationODE.jl code.
ChrisRackauckas and others added 6 commits May 24, 2025 08:12
Added algorithm functionality along with helper functions for:

ODEGradientDescent, RKChebyshevDescent, RKAccelerated, PRKChebyshevDescent.
Updated tests for the new functions and AD test for ODEGradientDescent.
Updated code for the 4 algs, removing redundant code.
Updated tests using the ODEOptimizer wrapper.
added OrdinaryDiffEq for reusing ODE solvers.
Updated struct and constructors, the ODEOptimizer now takes type instead of instance.
updated tests.
Add callback and progress support; struct accepts type instead of instance; maxiters passed.
Tests match the latest ODE code with callbacks and maxiters.
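The "type instead of instance" change in the commits above is a common Julia pattern: the wrapper stores the solver *type* and instantiates it only when the solve runs, so construction arguments can be supplied later. A base-Julia sketch of the pattern (all names here are illustrative stand-ins, not the package's actual structs):

```julia
# Sketch of the "struct accepts type instead of instance" pattern.
struct ODEOptimizerSketch{T}
    solver_type::T        # e.g. the type Euler, not an instance Euler()
end

# Instantiate the stored solver type only at solve time.
run_solver(opt::ODEOptimizerSketch) = opt.solver_type()

struct EulerLike end      # stand-in for an OrdinaryDiffEq solver type

opt = ODEOptimizerSketch(EulerLike)
run_solver(opt) isa EulerLike   # true
```

Deferring instantiation keeps the optimizer wrapper lightweight and lets keyword arguments (dt, maxiters, callbacks) flow in through the solve call rather than the constructor.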


function SciMLBase.__init(prob::OptimizationProblem, opt::ODEOptimizer, data=Optimization.DEFAULT_DATA;
η=0.1, dt=nothing, tmax=100.0, callback=Optimization.DEFAULT_CALLBACK, progress=false,
Member:

Suggested change:
- η=0.1, dt=nothing, tmax=100.0, callback=Optimization.DEFAULT_CALLBACK, progress=false,
+ dt=nothing, tmax=100.0, callback=Optimization.DEFAULT_CALLBACK, progress=false,


η = get(cache.solver_args, :η, 0.1)
dt = get(cache.solver_args, :dt, nothing)
tmax = get(cache.solver_args, :tmax, 100.0)
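The snippet above reads optional solver arguments out of a stored collection with per-key defaults via `Base.get`. A standalone illustration of that lookup pattern (the stored values here are hypothetical):

```julia
# Pulling optional keyword arguments out of a NamedTuple of solver args,
# falling back to a default when a key is absent, as in the snippet above.
solver_args = (dt = 0.05, tmax = 50.0)   # hypothetical stored kwargs

dt   = get(solver_args, :dt, nothing)    # key present  → 0.05
tmax = get(solver_args, :tmax, 100.0)    # key present  → 50.0
η    = get(solver_args, :η, 0.1)         # key absent   → default 0.1
```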
Member:

why is this set? Is that standard? That seems really odd.

ChrisRackauckas merged commit 0c9b2e4 into SciML:master, May 31, 2025
21 of 26 checks passed
ChrisRackauckas (Member):

Docs next
