Avoid log(0) in KL divergence #12237
Conversation
What do you actually want to change?
To be precise, I want to change the kullback_leibler_divergence method to fix the bug where the return value becomes inf when the numerator or denominator of np.log(y_true / y_pred) is zero. The small loss of precision introduced by the change will not affect the calculation of machine learning model parameters. However, to strictly follow the latest contribution guidelines, I also need to fix a type error in the previous version before I can submit it.
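For reference, a minimal sketch of how this failure mode shows up, assuming the function has the usual np.sum(y_true * np.log(y_true / y_pred)) form (the exact implementation in the repository may differ):
import numpy as np

def kullback_leibler_divergence(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Assumed current form: element-wise y_true * log(y_true / y_pred), then summed."""
    return float(np.sum(y_true * np.log(y_true / y_pred)))

# A zero in y_true gives 0 * log(0) = 0 * (-inf) = nan;
# a zero in y_pred makes y_true / y_pred infinite, so that term (and the sum) is inf.
print(kullback_leibler_divergence(np.array([0.0, 0.5, 0.5]), np.array([0.2, 0.3, 0.5])))  # nan
print(kullback_leibler_divergence(np.array([0.5, 0.5]), np.array([0.0, 1.0])))            # inf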
@bz-e I think we do not intend to change the default behavior when all y_true are nonzero. |
I meant you could do something like the following (it need not be identical, just a demo), and the complexity stays linear.
mask = y_true != 0
y_true_filtered = y_true[mask]
y_pred_filtered = y_pred[mask]
kl_loss = y_true_filtered * np.log(y_true_filtered / y_pred_filtered)
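For context, a self-contained sketch of how this masking idea could look inside the function (the function name and values here are illustrative, not the final patch):
import numpy as np

def kl_divergence_masked(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Masking approach: terms with y_true == 0 contribute nothing to the sum."""
    if len(y_true) != len(y_pred):
        raise ValueError("Input arrays must have the same length.")
    mask = y_true != 0
    y_true_filtered = y_true[mask]
    y_pred_filtered = y_pred[mask]
    return float(np.sum(y_true_filtered * np.log(y_true_filtered / y_pred_filtered)))

# The zero entry in y_true no longer produces nan; the result is finite (~0.255).
print(kl_divergence_masked(np.array([0.0, 0.5, 0.5]), np.array([0.2, 0.3, 0.5])))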
For educational purposes I think you are right; this is the approach with the fewest changes.
…denominator and added a test case
Describe your change:
Fixes #12233
Added a `None` type to pass type checking, added a small constant to the kullback_leibler_divergence method to fix the bug where the numerator or denominator is 0, and added a test case.
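Not the exact diff, just a sketch of what the small-constant approach and a matching test could look like (the epsilon value, clipping strategy, and test values here are illustrative assumptions):
import numpy as np

def kullback_leibler_divergence(
    y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-12
) -> float:
    """Sketch: clip both arrays away from zero before taking the log."""
    if len(y_true) != len(y_pred):
        raise ValueError("Input arrays must have the same length.")
    y_true = np.clip(y_true, epsilon, None)
    y_pred = np.clip(y_pred, epsilon, None)
    return float(np.sum(y_true * np.log(y_true / y_pred)))

def test_kl_divergence_handles_zeros() -> None:
    """Zeros in either array should yield a finite result instead of inf/nan."""
    result = kullback_leibler_divergence(
        np.array([0.0, 0.5, 0.5]), np.array([0.2, 0.0, 0.8])
    )
    assert np.isfinite(result)

test_kl_divergence_handles_zeros()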
Checklist: