TST: run tests on CPU+GPU #221
Conversation
```diff
@@ -298,7 +300,7 @@ def _op(
     and idx.dtype == xp.bool
     and idx.shape == x.shape
 ):
-    y_xp = xp.asarray(y, dtype=x.dtype)
+    y_xp = xp.asarray(y, dtype=x.dtype, device=_compat.device(x))
```
Untested fix, which only benefits eager JAX. On `jax.jit`, device propagation fails due to jax-ml/jax#26000.

In a follow-up PR I'll rework the `test_device` tests to align them with the pattern recently established in scipy.
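For illustration, a minimal sketch of what the changed line does, assuming `_compat` is `array_api_compat` (as in the diff above); the sample arrays are made up:

```python
# Illustrative sketch; only the last line mirrors the actual diff.
import array_api_compat as _compat
import jax.numpy as xp

x = xp.asarray([1.0, 2.0, 3.0])  # reference array, possibly on GPU
y = [4.0, 5.0, 6.0]              # plain Python sequence to convert

# Pin the converted array to x's device so eager JAX does not fall back
# to the default device. Under jax.jit this is not enough: device
# propagation still fails (jax-ml/jax#26000).
y_xp = xp.asarray(y, dtype=x.dtype, device=_compat.device(x))
```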
```diff
     DASK = "dask.array", _compat.is_dask_namespace
     SPARSE = "sparse", _compat.is_pydata_sparse_namespace
     JAX = "jax.numpy", _compat.is_jax_namespace
+    JAX_GPU = "jax.numpy:gpu", _compat.is_jax_namespace
```
As an aside, I profoundly dislike that this enum is used in production by the dispatch mechanism. @lucascolley would it be OK if I make it test-only again and just use `is_*_namespace` in dispatch?
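A rough sketch of what that could look like, using the `is_*_namespace` helpers from `array_api_compat`; the branch bodies are hypothetical placeholders, not the library's actual code:

```python
# Hypothetical sketch of enum-free dispatch on is_*_namespace helpers.
from array_api_compat import (
    is_dask_namespace,
    is_jax_namespace,
    is_pydata_sparse_namespace,
)

def _op(x, xp):
    if is_dask_namespace(xp):
        return xp.asarray(x)  # dask-specific path would go here
    if is_jax_namespace(xp):
        return xp.asarray(x)  # jax-specific path would go here
    if is_pydata_sparse_namespace(xp):
        return xp.asarray(x)  # sparse-specific path would go here
    return xp.asarray(x)      # generic fallback
```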
> I profoundly dislike that this enum is used in production by the dispatch mechanism.

what motivates your dislike?
This enum contains a lot of "backends" that are just variant duplicates, which is an artifact of using the enum to parametrize the `xp` fixture. That's a decent hack for pytest, but it doesn't make any sense in the context of the dispatch system.
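A minimal hypothetical sketch of the pattern being criticized, with made-up members (the real enum also pairs each name with an `is_*_namespace` predicate, as in the diff above):

```python
# Hypothetical sketch: test-only variants such as JAX vs JAX_GPU share
# one namespace, so the enum exists mainly to parametrize the xp fixture.
import enum
import pytest

class Backend(enum.Enum):
    NUMPY = "numpy"
    JAX = "jax.numpy"
    JAX_GPU = "jax.numpy:gpu"  # duplicate namespace, different device

@pytest.fixture(params=list(Backend), ids=lambda b: b.value)
def xp(request):
    # Variants map back onto one module: strip the ":gpu" suffix.
    modname = request.param.value.split(":")[0]
    return pytest.importorskip(modname)
```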
okay, I see the problem. Happy to go with whatever works best for you. FWIW I would like to keep using an enum in the dispatch mechanism, but fair enough if it is too messy to maintain the `like` mapping.
(heads up, merge conflicts on the lock file will keep cropping up while renovate is spinning up, feel free to ignore)
the renovate traffic should have quietened down from now on
Thanks Guido, LGTM! I'll merge once scientific-python/specs#380 is resolved...

@rgommers please could you disable requiring signed commits on this repo, when you have time? Unfortunately, the advice from scientific-python/specs#380 didn't play out in practice.
sure, done
The `tests-cuda` and `dev-cuda` pixi environments run all tests first on CPU and then on GPU:

- `xpx.at` `__setitem__` with bool mask and dtype mismatch fails pytorch/pytorch#150017
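For context, a hypothetical sketch of running a test on CPU and then on GPU with a parametrized JAX fixture; the PR itself drives the CPU-then-GPU ordering through the pixi environments for the whole suite, so everything below (fixture, test, names) is illustrative only:

```python
# Hypothetical sketch: parametrize a test over CPU and GPU devices.
# Assumes a recent JAX where jnp.asarray accepts device= and arrays
# expose a .device property.
import jax
import jax.numpy as jnp
import pytest

@pytest.fixture(params=["cpu", "gpu"])
def device(request):
    try:
        return jax.devices(request.param)[0]
    except RuntimeError:
        pytest.skip(f"no {request.param} backend available")

def test_asarray_respects_device(device):
    x = jnp.asarray([1.0, 2.0], device=device)
    assert x.device == device
```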