
feat: [linalg] add iterative solvers #994


Open
wants to merge 47 commits into master

Conversation

jalvesz
Contributor

@jalvesz jalvesz commented May 17, 2025

This draft PR introduces the base for iterative solvers.

Two methods are proposed:

  • Conjugate Gradient (CG)
  • Preconditioned Conjugate Gradient (PCG)

Each method is exposed through two public interface flavors:

  • solve_<method>_kernel: all arguments are mandatory (no optionals, no internal allocations); it contains the method's steps. The linear system (and preconditioner) is defined through a public derived type linop, which exposes two key procedures to extend: apply, equivalent to a matrix-vector product, and inner, equivalent to the dot_product. This is the recommended interface for extending the method when dealing with a matrix type not available in stdlib, or when working in distributed-memory frameworks where the matvec, dot_product and factorization steps need to be adapted to account for parallel synchronization.
  • solve_<method>: API with optional arguments; the linear system can be defined with dense or CSR_<>_type matrices. For the PCG, the following preconditioners are available: none (identity), jacobi (1/diagonal). It internally uses solve_<method>_kernel. A usage sketch is shown after this list.
  • tests
  • examples
  • documentation
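
A minimal usage sketch of the convenience interface, assuming a module named `stdlib_linalg_iterative_solvers` and the `solve_cg` / `tol` argument names described above (the exact API may differ in the final PR):

```fortran
! Hypothetical example; module and argument names are assumptions based on
! the description above, not the final stdlib API.
program example_solve_cg
    use stdlib_kinds, only: dp
    use stdlib_linalg_iterative_solvers, only: solve_cg   ! assumed module/interface name
    implicit none

    real(dp) :: A(2,2), b(2), x(2)

    ! Small symmetric positive-definite system.
    A = reshape([4.0_dp, 1.0_dp, 1.0_dp, 3.0_dp], [2,2])
    b = [1.0_dp, 2.0_dp]
    x = 0.0_dp                                ! initial guess, overwritten with the solution

    call solve_cg(A, b, x, tol=1.0e-10_dp)    ! dense overload with optional tolerance

    print '(a,2es14.6)', 'x = ', x
end program example_solve_cg
```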

jalvesz and others added 30 commits March 2, 2025 22:26
@jvdp1 jvdp1 marked this pull request as ready for review May 18, 2025 08:52
@jvdp1
Member

jvdp1 commented May 18, 2025

Thank you @jalvesz. Nice approach.

@jalvesz jalvesz changed the title from "feat: [linalg] add iterative solvers (CG and PCCG)" to "feat: [linalg] add iterative solvers" on May 18, 2025
Member

@jvdp1 jvdp1 left a comment


Thank you @jalvesz. Interesting approach that is very flexible. In most of my practical cases, the coefficient matrix cannot be computed explicitly, even if very sparse. Your DT approach will still allow me to use these solvers.
I suggest splitting this PR in two: one for the iterative solvers (ldlt, ssor, ...) and one for the CG solvers.
Regarding the preconditioners, I think that proposing only identity/none and diagonal is good enough for a first PR. Additional preconditioners can be provided in follow-up PRs.
I think that such an approach will make this PR easier to review.

#:for k, t, s in R_KINDS_TYPES
    type, public :: solver_workspace_${s}$
        ${t}$, allocatable :: tmp(:,:)
        procedure(logger_sub_${s}$), pointer, nopass :: callback => null()
Member


Could the default logger be the logger provided by stdlib?

Contributor Author


Sure, I just wasn't sure what would be a good default logger. Do you have a proposal?

Member


Actually, looking again at the code, I think that your approach will provide full flexibility to the user.
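
For illustration, a user-side callback could forward solver progress to stdlib's `global_logger`; the callback argument list below is an assumption (only `stdlib_logger`'s `global_logger%log_message` is actual stdlib API):

```fortran
! Hypothetical callback; its argument list is assumed, not the PR's actual
! logger_sub interface. Only stdlib_logger's global_logger is real stdlib API.
subroutine cg_logger(x, norm_sq, iter)
    use stdlib_kinds,  only: dp
    use stdlib_logger, only: global_logger
    real(dp), intent(in) :: x(:)        ! current iterate (unused here)
    real(dp), intent(in) :: norm_sq     ! squared residual norm at this iteration
    integer,  intent(in) :: iter        ! iteration counter
    character(len=80) :: msg

    write(msg, '(a,i0,a,es12.5)') 'CG iteration ', iter, ': residual norm^2 = ', norm_sq
    call global_logger%log_message(trim(msg))
end subroutine cg_logger
```

Since the `callback` component is a `nopass` procedure pointer, such a routine could presumably be attached with something like `workspace%callback => cg_logger` before calling the solver.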


`x`: 1-D array of `real(<kind>)` which serves as the input initial guess and the output solution. This argument is `intent(inout)`.

`tol`: scalar of type `real(<kind>)` specifying the requested tolerance with respect to the relative residual vector norm. This argument is `intent(in)`.
Member


Regarding "tolerance with respect to the relative residual vector norm": maybe for a follow-up PR, should we support different convergence criteria?

Contributor Author

@jalvesz jalvesz May 29, 2025


Which criteria would you consider adding?
I guess monitoring the solution vector instead of the residual could also be considered, something like

$$\lVert x^i - x^{i-1} \rVert < \mathrm{tol} \cdot \lVert x^0 \rVert$$

Member


Yes, indeed. Here are a couple of others, related to the relative error in x: https://doi.org/10.1186/s12711-021-00626-1

!! The `linop` type is used to define the linear operator for the iterative solvers.
#:for k, t, s in R_KINDS_TYPES
    type, public :: linop_${s}$_type
        procedure(vector_sub_${s}$), nopass, pointer :: apply => null()
Contributor Author

@jalvesz jalvesz May 30, 2025


Thinking about other methods to add, it might also be necessary to cover applying the conjugate transpose in place. Two ideas to enable this (sketched below):

  1. the apply procedure also takes an "operation" character(1) argument ('N', 'T', 'H')
  2. a secondary procedure pointer is added, such as apply_transpose

That said, there are variants that do not need the transpose, so it is not necessarily a must. For instance, instead of implementing BiCG, the BiCG-Stab method doesn't require applying the transpose, and GMRES doesn't need it either...
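
A rough, untemplated sketch of what the two options could look like; the names and dummy-argument signatures are assumptions for illustration, not the PR's actual code:

```fortran
! Illustrative only; names and signatures are assumptions.
module linop_transpose_sketch
    use stdlib_kinds, only: dp
    implicit none

    abstract interface
        ! Option 1: a single apply that also receives an operation flag.
        subroutine apply_op_sub(x, y, op)
            import :: dp
            real(dp), intent(in)     :: x(:)
            real(dp), intent(inout)  :: y(:)
            character(1), intent(in) :: op   ! 'N' = none, 'T' = transpose, 'H' = conjugate transpose
        end subroutine apply_op_sub

        ! Plain matrix-vector product, as in the current linop design.
        subroutine apply_sub(x, y)
            import :: dp
            real(dp), intent(in)    :: x(:)
            real(dp), intent(inout) :: y(:)
        end subroutine apply_sub
    end interface

    ! Option 2: keep apply as-is and add a second procedure pointer for the transpose.
    type, public :: linop_with_transpose_type
        procedure(apply_sub), nopass, pointer :: apply           => null()
        procedure(apply_sub), nopass, pointer :: apply_transpose => null()
    end type linop_with_transpose_type

end module linop_transpose_sketch
```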
