TST: run tests on CPU+GPU #221


Merged: 5 commits, merged Apr 15, 2025
Conversation

@crusaderky (Contributor) commented on Apr 8, 2025

@@ -298,7 +300,7 @@ def _op(
         and idx.dtype == xp.bool
         and idx.shape == x.shape
     ):
-        y_xp = xp.asarray(y, dtype=x.dtype)
+        y_xp = xp.asarray(y, dtype=x.dtype, device=_compat.device(x))
@crusaderky (Contributor, Author) commented on the diff:
Untested fix, which only benefits eager JAX. Under jax.jit, device propagation fails due to jax-ml/jax#26000
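The concern behind the one-line fix can be sketched with a toy namespace (hedged: `ToyNamespace`, `Array`, and the `device` helper here are illustrative stand-ins for a standard-compliant array namespace and array_api_compat's `device()` accessor, not the project's real code). Without an explicit `device=`, the converted scalar falls back to the namespace's default device instead of following `x`:

```python
# Toy sketch: why xp.asarray(y, ...) needs device=device(x).
class Array:
    def __init__(self, value, dtype, device):
        self.value, self.dtype, self.device = value, dtype, device

class ToyNamespace:
    """Stand-in array namespace with a configurable default device."""
    default_device = "cpu"

    def asarray(self, value, dtype=None, device=None):
        # Mirrors the Array API standard: device=None means "default device".
        dev = device if device is not None else self.default_device
        return Array(value, dtype, dev)

def device(x):
    # Stand-in for array_api_compat's `_compat.device(x)` accessor.
    return x.device

xp = ToyNamespace()
x = Array([1.0, 2.0], "float64", "gpu:0")   # pretend x lives on a GPU
before = xp.asarray(2.0, dtype=x.dtype)     # lands on the default "cpu"
after = xp.asarray(2.0, dtype=x.dtype, device=device(x))  # follows x

assert before.device == "cpu"
assert after.device == "gpu:0"
```

On a single-device backend like NumPy the two calls behave identically, which is why the bug only surfaces once tests actually run on a GPU.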

In a follow-up PR I'll rework the test_device tests to align them to the pattern recently established in scipy.

@lucascolley added this to the 0.7.2 milestone on Apr 8, 2025
DASK = "dask.array", _compat.is_dask_namespace
SPARSE = "sparse", _compat.is_pydata_sparse_namespace
JAX = "jax.numpy", _compat.is_jax_namespace
JAX_GPU = "jax.numpy:gpu", _compat.is_jax_namespace
@crusaderky (Contributor, Author) commented on the diff:
As an aside, I profoundly dislike that this enum is used in production by the dispatch mechanism.
@lucascolley would it be OK if I make it test-only again and just use is_*_namespace in dispatch?

@lucascolley (Member):
> I profoundly dislike that this enum is used in production by the dispatch mechanism.

what motivates your dislike?

@crusaderky (Contributor, Author):
This enum contains a lot of "backends" that are just variant duplicates, which is an artifact of using the enum to parametrize the xp fixture. That is a decent hack for pytest, but it makes no sense in the context of the dispatch system.
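For context, the parametrization pattern being described can be sketched as follows (hedged: `Backend`, its members, and `_is_numpy_namespace` are illustrative, not the project's actual code; the real enum pairs entries like `"dask.array"` with `_compat.is_dask_namespace`). The `"numpy:readonly"` member below is the kind of variant duplicate being objected to: it exists only to parametrize tests, not to identify a distinct namespace.

```python
import enum
import importlib

import pytest

def _is_numpy_namespace(mod):
    # Illustrative stand-in for array_api_compat's is_*_namespace helpers.
    return mod.__name__.split(".")[0] == "numpy"

class Backend(enum.Enum):
    # Each member pairs an import spec with a namespace predicate, mirroring
    # entries like `DASK = "dask.array", _compat.is_dask_namespace`.
    NUMPY = "numpy", _is_numpy_namespace
    NUMPY_READONLY = "numpy:readonly", _is_numpy_namespace  # test-only variant

    def __init__(self, spec, is_namespace):
        # Variant suffixes after ":" select test behaviour, not a module.
        self.modname = spec.split(":")[0]
        self.is_namespace = is_namespace

@pytest.fixture(params=list(Backend), ids=lambda b: b.value[0])
def xp(request):
    # Every test that requests `xp` runs once per Backend member.
    return importlib.import_module(request.param.modname)
```

The variant members make each test run under several configurations of the same module, which is convenient for pytest but leaves the enum with members that a dispatch mechanism has no use for.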

@lucascolley (Member):
Okay, I see the problem. Happy to go with whatever works best for you. FWIW I would like to keep using an enum in the dispatch mechanism, but fair enough if such a mapping is too messy to maintain.

@lucascolley (Member):

(heads up: merge conflicts on the lock file will keep cropping up while Renovate is spinning up; feel free to ignore them)

@lucascolley (Member):

The Renovate traffic should quieten down from now on.

@lucascolley (Member) left a review:

Thanks Guido, LGTM!

I'll merge once scientific-python/specs#380 is resolved...

@lucascolley (Member):

@rgommers please could you disable requiring signed commits on this repo, when you have time? Unfortunately, the advice from scientific-python/specs#380 didn't play out in practice.

@rgommers (Member):

sure, done

@lucascolley merged commit d3f6f67 into data-apis:main on Apr 15, 2025
10 checks passed
@crusaderky deleted the gpu branch on April 15, 2025 at 12:44

3 participants