
Add mechanism for randomizing test parametrizations  #649

Open
@Time0o

Description


There is an older issue that goes in a similar direction: #75, with the final word being that this does not fall within the scope of this plugin.

I am not a frequent pytest user, but it seems to me that the following is both a valid and common use case that should be supported by this plugin (rather than by yet another randomization mechanism):

Let's say I have a reference implementation of a function f and another implementation g (maybe using a more optimized algorithm or similar) for which I want to assert that its behavior is the same as f's. I could write a test like:

import pytest

@pytest.mark.parametrize("a", [1, 2, 3])
@pytest.mark.parametrize("b", [1, 2, 3])
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)

All well and good, but what if the space of valid parameter combinations is very large and f is blackbox-y enough that I can't say exactly what all of its corner cases are? Then I would like to randomly sample the entire parameter space. That probably goes beyond the scope of pytest-randomly. So I would write my own decorator along the lines of:

@parametrize_random(
    [
        ("a", list(range(1000))),
        ("b", list(range(1000))),
    ],
    samples=100,
)
def test_g_implements_f(a, b):
    assert g(a, b) == f(a, b)

This is better than, e.g., generating random parameters in a for loop inside the test, because each parametrization can then run independently. Inside parametrize_random I would most likely want to use the same random seeding mechanism that pytest-randomly uses to seed at the start of every test, and there seems to be no easy way to do that. Should there be, or is there a better solution to this?
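For illustration, here is a minimal sketch of how such a decorator could look, assuming for now that a fixed seed is passed in explicitly (the decorator name, signature, and seed parameter are all hypothetical, not part of pytest or pytest-randomly):

import itertools
import random

import pytest


def parametrize_random(params, samples, seed=0):
    # Build the full Cartesian product of the given (name, values)
    # pairs, then draw `samples` combinations from it without
    # replacement using a deterministically seeded RNG.
    # (For simplicity this assumes at least two parameters.)
    names = [name for name, _ in params]
    space = list(itertools.product(*(values for _, values in params)))
    chosen = random.Random(seed).sample(space, samples)
    return pytest.mark.parametrize(",".join(names), chosen)

The open question is how to hook this up to pytest-randomly's seed: the seed is only known once pytest's configuration has been parsed, while decorators run at module import time without access to the config, so the sampling would presumably have to move into something like a pytest_generate_tests hook to pick the seed up from the plugin.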
