
Conversation


@KYash03 KYash03 commented Feb 16, 2025

Fixes #6830

Collaborator

jameslamb commented Feb 16, 2025

Thanks for working on this, can you please add some tests that cover these exceptions?

@KYash03 KYash03 force-pushed the fix/forcedsplits-file-error branch from 7e35462 to 05430e5 Compare February 17, 2025 01:08
Author

KYash03 commented Feb 17, 2025

@microsoft-github-policy-service agree

@KYash03 KYash03 force-pushed the fix/forcedsplits-file-error branch from 05430e5 to 133cc75 Compare February 17, 2025 01:13
@jameslamb jameslamb changed the title [gbdt] enhance error handling for forced splits file loading [c++] enhance error handling for forced splits file loading Feb 17, 2025
Collaborator

@jameslamb left a comment

Thanks for working on this! The general approach looks good and the error messages are informative. Nice idea thinking about "file exists but cannot be parsed" as a separate case too!

But I think this deserves some more careful consideration, to be sure that we don't end up introducing a requirement that the file indicated by forcedsplits_filename also exist at scoring (prediction) time.

Comment on lines +86 to +88
if (!forced_splits_file.good()) {
Log::Warning("Forced splits file '%s' does not exist. Forced splits will be ignored.",
config->forcedsplits_filename.c_str());
Collaborator

I think this should be a fatal error at training time... if I'm training a model and expecting specific splits to be used, I'd prefer a big loud error to a training run wasting time and compute resources only to produce a model that accidentally does not look like what I'd wanted.

HOWEVER... I think GBDT::Init() and/or GBDT::ResetConfig() will also be called when you load a model at scoring time, and at scoring time we wouldn't want to get a fatal error because of a missing or malformed file which is only supposed to affect training.

I'm not certain how to resolve that. Can you please investigate that and propose something?

It would probably be helpful to add tests for these different conditions. You can do this in Python for this purpose. Or if you don't have time / interest, I can push some tests here and then you could work on making them pass?

So to be clear, the behavior I want to see is:

  • training time:
    • forcedsplits_filename file does not exist or is not readable --> ERROR
    • forcedsplits_filename is not valid JSON --> ERROR
  • prediction / scoring time:
    • forcedsplits_filename file does not exist or is not readable --> no log output, no errors
    • forcedsplits_filename is not valid JSON --> no log output, no errors
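
To make those conditions concrete, here is a minimal pytest-style sketch of the two training-time cases listed above (an editor's illustration, not code from this PR; it assumes the lightgbm Python package and that a fatal C++ error surfaces as lightgbm.basic.LightGBMError):

import numpy as np
import pytest
import lightgbm as lgb
from lightgbm.basic import LightGBMError

def _train_with_forced_splits(forced_splits_path):
    # Small synthetic regression problem; the forced splits file path is the only thing under test.
    X = np.random.rand(100, 2)
    y = np.random.rand(100)
    params = {
        "objective": "regression",
        "forcedsplits_filename": str(forced_splits_path),
        "num_leaves": 7,
        "verbose": -1,
    }
    return lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=2)

def test_missing_forced_splits_file_errors_at_training_time(tmp_path):
    # File does not exist --> expect a loud error during training.
    with pytest.raises(LightGBMError):
        _train_with_forced_splits(tmp_path / "does_not_exist.json")

def test_malformed_forced_splits_file_errors_at_training_time(tmp_path):
    # File exists but is not valid JSON --> also expect an error during training.
    bad_file = tmp_path / "forced_splits.json"
    bad_file.write_text("{ not valid json")
    with pytest.raises(LightGBMError):
        _train_with_forced_splits(bad_file)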

Author

KYash03 commented Feb 18, 2025

We could add a flag to the GBDT class to indicate the current mode.

This is what I was thinking:

// New member on the GBDT class, indicating whether training is in progress.
bool is_training_ = false;

// Turn the flag on at the start of training, and off at the end.
void GBDT::Train() {
  is_training_ = true;
  // ... regular training code ...
  is_training_ = false;
}

// In Init() and ResetConfig(), handle the forced splits file as follows:
if (is_training_) {
  // Raise a fatal error if the file is missing, unreadable, or not valid JSON.
} else {
  // Silently skip forced splits if there are issues.
}

Regarding the tests, I'd be happy to write them!

Collaborator

Thanks very much. It is not that simple.

For example, there are many workflows where training and prediction are done in the same process, using the same Booster. So a single property is_training_ is not going to work.

There are also multiple APIs for training.

void GBDT::Train(int snapshot_freq, const std::string& model_output_path) {

bool GBDT::TrainOneIter(const score_t* gradients, const score_t* hessians) {

And we'd also want to be careful to not introduce this type of checking on every boosting round, as that would hurt performance.

Maybe @shiyu1994 could help us figure out where to put a check like this.

Also referencing this related PR to help: #5653

Collaborator

What if we consider forced splits to be forbidden at inference time? That would also tell the user that forced splitting is impossible once the model has already been trained.

Collaborator

Introducing a flag to check whether the model is to be used for inference or training is quite complicated. That's why I think the current solution is acceptable.

Collaborator

@jameslamb What do you think about keeping the current changes in this PR, given the reasons above?

Collaborator

@jameslamb What do you think about keeping the current changes in this PR, given the reasons above?

Sorry for the delay.

I think just raising a warning is an acceptable compromise... it gives users a hint to follow, and by not being a fatal error it shouldn't cause problems at inference time.

This will mean that if you train a model with forced splits, save it to a file, then load it in another environment where that file referenced by forcedsplits_filename does not exist, you'll now get a warning about this. That might be annoying for people but I think it's worth it for the benefits mentioned above.

So for this PR... I support this, but @KYash03 please add tests for the conditions I mentioned in https://github.com/microsoft/LightGBM/pull/6832/files#r1957536985 (but with the "file does not exist or is not readable" case always resulting in this warning message in the logs).


@shiyu1994 @StrikerRUS in the future, do you think we should move towards forced splits being considered "data" instead of a parameter? That way, it wouldn't get persisted in the model file (just as init_score and weight are not persisted in the model file). That'd be a clean way to achieve behavior like "forced splits are only used at training time", I think. If you agree with that as a better long-term state, I can write up a feature request describing it.
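
As a rough illustration of the train-then-load scenario described above (an editor's sketch with hypothetical file names; it assumes the warning-on-missing-file behavior proposed in this comment, not confirmed LightGBM behavior):

import json
import os
import numpy as np
import lightgbm as lgb

X = np.random.rand(200, 2)
y = np.random.rand(200)

# Train with a forced split on feature 0 and save the model to a file.
with open("forced_splits.json", "w") as f:
    json.dump({"feature": 0, "threshold": 0.5}, f)
params = {
    "objective": "regression",
    "forcedsplits_filename": "forced_splits.json",
    "verbose": -1,
}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=5)
booster.save_model("model.txt")

# Simulate loading the model in another environment where the forced splits
# file referenced by the saved parameters no longer exists.
os.remove("forced_splits.json")
loaded = lgb.Booster(model_file="model.txt")  # expected: a warning in the logs, not a fatal error
loaded.predict(X)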

Collaborator

That way, it wouldn't get persisted in the model file (just as init_score and weight are not persisted in the model file).

Hey, I think it's a good idea!

@shiyu1994
Collaborator

Thanks for the contribution. I will review this soon.

@shiyu1994 shiyu1994 self-assigned this Mar 6, 2025
Collaborator

@shiyu1994 left a comment

The changes look good to me in general. But let's wait for a conclusion of our discussion above.

@StrikerRUS
Collaborator

/AzurePipelines run

@shiyu1994
Collaborator

/AzurePipelines run

@StrikerRUS
Collaborator

Kindly ping @jameslamb for this comment #6832 (comment).

Successfully merging this pull request may close these issues.

[c++] forcedsplits_filename pointing at a non-existent file is silently ignored
