I was trying to fit an ML error model with 71,000 observations, but memory use grew very quickly. This may be because some kind of parallelization kicks in (it shouldn't), but it feels more like the weights matrix is going dense. The messages before I killed the process were:
```
> py_mlerror <- spr$ML_Error(y, X, w=nb_q0, method="LU")
/usr/lib64/python3.9/site-packages/scipy/optimize/_minimize.py:779: RuntimeWarning: Method 'bounded' does not support relative tolerance in x; defaulting to absolute tolerance.
  warn("Method 'bounded' does not support relative tolerance in x; "
/usr/lib64/python3.9/site-packages/scipy/sparse/_index.py:125: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
  self._set_arrayXarray(i, j, x)
/usr/lib64/python3.9/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:318: SparseEfficiencyWarning: splu requires CSC matrix format
  warn('splu requires CSC matrix format', SparseEfficiencyWarning)
/usr/lib64/python3.9/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:215: SparseEfficiencyWarning: spsolve is more efficient when sparse b is in the CSC matrix format
  warn('spsolve is more efficient when sparse b '
```
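For reference, this is how I understand the format mismatch (a sketch with a hypothetical lattice standing in for my actual queen-contiguity weights `nb_q0`, which I can't attach): libpysal exposes the weights as CSR, while `splu`/`spsolve` want CSC, which explains the SparseEfficiencyWarnings. But the conversion is cheap and W itself stays genuinely sparse, so the warnings alone don't account for the memory:

```python
from libpysal import weights

# Hypothetical stand-in for my real queen-contiguity weights (nb_q0):
# a row-standardized regular lattice instead of the 71,000 observations.
w = weights.lat2W(100, 100, rook=False)
w.transform = "r"

W = w.sparse                      # libpysal hands this back as CSR
W_csc = W.tocsc()                 # the format splu/spsolve actually want

density = W_csc.nnz / (W_csc.shape[0] * W_csc.shape[1])
print(type(W).__name__, f"density = {density:.2e}")
# density is tiny, so W itself is not where the memory goes
```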
I think the sparse weights matrix is CSR, not CSC. Is the problem the densifying of the variance-covariance matrix, around line 244 in c6d97c1? Does `spinv()` go dense on return?
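If `spinv()` effectively inverts (I - λW), that would explain the blow-up regardless of storage format: the inverse of a contiguity-based (I - λW) has near-complete fill-in, so a nominally sparse return is dense in practice. A small sketch using scipy's sparse `inv` as a stand-in for `spinv()` (the lattice is again a hypothetical placeholder):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import inv
from libpysal import weights

# Small hypothetical lattice; the effect scales as n^2 at n = 71,000.
w = weights.lat2W(30, 30, rook=False)
w.transform = "r"
W = sp.csc_matrix(w.sparse)

lam = 0.5
A = sp.identity(W.shape[0], format="csc") - lam * W
A_inv = inv(A)                    # sparse container, dense contents

n2 = A.shape[0] * A.shape[1]
print(f"fill-in: {A.nnz} -> {A_inv.nnz} of {n2} entries "
      f"({A_inv.nnz / n2:.0%})")
# Near 100% fill-in: at n = 71,000 that is ~5e9 float64 values,
# i.e. roughly 40 GB, consistent with the runaway memory use.
```

If that's what is happening, solving against only the needed vectors with `splu`/`spsolve` instead of forming the inverse would keep memory roughly O(n).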