method='LU' memory use in spreg.ML_Error()? #82

Open
@rsbivand

Description

I was trying to fit an ML error model with 71,000 observations, but memory use grew very quickly. This may be because some kind of parallelization kicks in (it shouldn't), but it looks more like the weights matrix going dense. The messages before I killed the process were:

> py_mlerror <- spr$ML_Error(y, X, w=nb_q0, method="LU")
/usr/lib64/python3.9/site-packages/scipy/optimize/_minimize.py:779: RuntimeWarning: Method 'bounded' does not support relative tolerance in x; defaulting to absolute tolerance.
  warn("Method 'bounded' does not support relative tolerance in x; "
/usr/lib64/python3.9/site-packages/scipy/sparse/_index.py:125: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
  self._set_arrayXarray(i, j, x)
/usr/lib64/python3.9/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:318: SparseEfficiencyWarning: splu requires CSC matrix format
  warn('splu requires CSC matrix format', SparseEfficiencyWarning)
/usr/lib64/python3.9/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:215: SparseEfficiencyWarning: spsolve is more efficient when sparse b is in the CSC matrix format
  warn('spsolve is more efficient when sparse b '
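
For reference, the SparseEfficiencyWarnings read as if the weights matrix reaches splu()/spsolve() in CSR form and gets converted on every likelihood evaluation. A minimal sketch of the LU log-Jacobian step with everything held in CSC up front (n, lam, and the random W below are stand-ins, not spreg internals):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1_000                                         # stand-in for the real n
lam = 0.5                                         # hypothetical lambda value
W = sp.random(n, n, density=5 / n, format="csc")  # hypothetical weights, CSC

# (I - lam * W) stays CSC, so splu() raises no SparseEfficiencyWarning
A = sp.identity(n, format="csc") - lam * W
lu = splu(A)

# log|I - lam*W| from the LU factors: L has a unit diagonal, so the
# log-determinant (up to sign) is the sum of log|diag(U)|
logdet = np.sum(np.log(np.abs(lu.U.diagonal())))
print(logdet)
```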

I think the sparse weights matrix is CSR, not CSC. Is the problem the densifying of the variance-covariance matrix, around the line `a = -self.lam * W`? Could a finite-difference Hessian help? Does spinv() go dense on return?
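
On the last point, a quick check of the suspicion, with scipy.sparse.linalg.inv standing in for spreg's spinv(): the inverse of (I - lam*W) is structurally dense whenever W is connected, so even if it comes back in a sparse container its nnz approaches n^2, which at n = 71,000 is roughly 40 GB of doubles.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import inv

n = 2_000                                         # small enough to run locally
W = sp.random(n, n, density=5 / n, format="csc")  # hypothetical weights
A = sp.identity(n, format="csc") - 0.5 * W

A_inv = inv(A)            # returned in a sparse container, but...
print(A_inv.nnz / n**2)   # fill fraction: typically near 1.0 for connected W
```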
