
Conversation

@cchung100m (Contributor) commented on Oct 25, 2025:

This PR tries to fix issue #18362.

Solution:

For now, return zeros with the correct shape, since randn-based random initialization is mainly used during training, not inference. This is a temporary solution until Relax adds proper random number generation support.
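A minimal sketch of this workaround in the torch FX frontend, assuming a converter method named _randn on the translator class (the method name and the surrounding class are assumptions here; the actual change is the small snippet quoted in the review thread below):

import torch
from tvm import relax

def _randn(self, node) -> relax.Var:
    # Sketch only: assumes the translator exposes _convert_data_type, env,
    # and block_builder, as the existing FX frontend converters do.
    dtype = self._convert_data_type(
        node.kwargs.get("dtype", torch.get_default_dtype()), self.env
    )
    # TODO: replace with a real random-normal op once Relax supports RNG.
    return self.block_builder.emit(relax.op.zeros(node.args[0], dtype))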

Future Enhancement:

A proper implementation would require adding:

  • A relax.op.random_normal(shape, mean=0, std=1, dtype, seed) operator (a rough sketch of how the converter might emit it follows this list)
  • TIR lowering rules for CPU/GPU backends
  • State management for reproducible random sequences
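
For illustration only: relax.op.random_normal does not exist yet, and the call below simply mirrors the signature proposed in the first bullet; once such an operator lands, the converter could emit it instead of zeros.

def _randn(self, node) -> relax.Var:
    dtype = self._convert_data_type(
        node.kwargs.get("dtype", torch.get_default_dtype()), self.env
    )
    # Hypothetical operator; signature taken from the proposal above.
    return self.block_builder.emit(
        relax.op.random_normal(node.args[0], mean=0.0, std=1.0, dtype=dtype)
    )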

@cchung100m force-pushed the issue-18362 branch 8 times, most recently from ed0fb06 to 425e78f on November 1, 2025 09:44
@cchung100m marked this pull request as ready for review on December 30, 2025 11:26
@cchung100m (Contributor, Author) commented:

Hi @tlopex @mshr-h

Any suggestions would be appreciated if you have time to take a look.

dtype = self._convert_data_type(
    node.kwargs.get("dtype", torch.get_default_dtype()), self.env
)
return self.block_builder.emit(relax.op.zeros(node.args[0], dtype))
A Member left a review comment on the diff lines above:

Could you add a TODO note here indicating that we still need to implement this properly?

@mshr-h (Contributor) commented on Jan 4, 2026:

I don't know if we really need temporary support for randn. Since we added the custom op converter in #18544, users can now define a randn converter in their own codebase instead of editing the TVM torch frontend files.
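
For reference, a user-side randn converter along these lines might look roughly as follows; the registration entry point (a custom_convert_map argument to from_fx) and the converter signature are assumptions for illustration only, since the actual mechanism is the one added in #18544:

import torch
from tvm import relax

def convert_randn(node, translator):
    # Same temporary behaviour as in this PR: emit zeros of the requested shape.
    dtype = translator._convert_data_type(
        node.kwargs.get("dtype", torch.get_default_dtype()), translator.env
    )
    return translator.block_builder.emit(relax.op.zeros(node.args[0], dtype))

# Assumed registration point; the real API from #18544 may differ:
# from tvm.relax.frontend.torch import from_fx
# mod = from_fx(graph_module, input_info, custom_convert_map={torch.randn: convert_randn})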

@cchung100m (Contributor, Author) commented:

Hi @tlopex @mshr-h
Thanks for the prompt reply. I have no other thoughts on #18362 if users can now define a randn converter via the custom op converter. I can close this PR if no further action is needed.

@cchung100m marked this pull request as draft on January 4, 2026 15:13