41 changes: 30 additions & 11 deletions src/boosting/gbdt.cpp
```diff
@@ -83,10 +83,19 @@ void GBDT::Init(const Config* config, const Dataset* train_data, const Objective
   // load forced_splits file
   if (!config->forcedsplits_filename.empty()) {
     std::ifstream forced_splits_file(config->forcedsplits_filename.c_str());
-    std::stringstream buffer;
-    buffer << forced_splits_file.rdbuf();
-    std::string err;
-    forced_splits_json_ = Json::parse(buffer.str(), &err);
+    if (!forced_splits_file.good()) {
+      Log::Warning("Forced splits file '%s' does not exist. Forced splits will be ignored.",
+                   config->forcedsplits_filename.c_str());
```
Comment on lines +86 to +88
Collaborator:
I think this should be a fatal error at training time... if I'm training a model and expecting specific splits to be used, I'd prefer a big loud error to a training run wasting time and compute resources only to produce a model that accidentally does not look like what I'd wanted.

HOWEVER... I think GBDT::Init() and/or GBDT::ResetConfig() will also be called when you load a model at scoring time, and at scoring time we wouldn't want to get a fatal error because of a missing or malformed file which is only supposed to affect training.

I'm not certain how to resolve that. Can you please investigate that and propose something?

It would probably be helpful to add tests for these different conditions. You can do this in Python for this purpose. Or if you don't have time / interest, I can push some tests here and then you could work on making them pass?

So to be clear, the behavior I want to see is:

  • training time:
    • forcedsplits_filename file does not exist or is not readable --> ERROR
    • forcedsplits_filename is not valid JSON --> ERROR
  • prediction / scoring time:
    • forcedsplits_filename file does not exist or is not readable --> no log output, no errors
    • forcedsplits_filename is not valid JSON --> no log output, no errors
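
The four cases above could be sketched as follows, purely as an illustration in Python; `load_forced_splits` and the `is_training` flag are hypothetical names for this sketch, not LightGBM API:

```python
import json
import os

def load_forced_splits(path, is_training):
    """Hypothetical sketch of the requested policy: strict at training
    time, completely silent at prediction/scoring time."""
    if not os.path.isfile(path):
        if is_training:
            # training time: file does not exist or is not readable --> ERROR
            raise RuntimeError(
                f"Forced splits file '{path}' does not exist or is not readable.")
        return None  # scoring time: no log output, no errors
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError as err:
        if is_training:
            # training time: file is not valid JSON --> ERROR
            raise RuntimeError(
                f"Failed to parse forced splits file '{path}': {err}")
        return None  # scoring time: malformed file is silently ignored
```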

Author (@KYash03, Feb 18, 2025):

We could add a flag to the GBDT class to indicate the current mode.

This is what I was thinking:

```cpp
bool is_training_ = false;

// Turn the flag on at the start of training, and off at the end.
void GBDT::Train() {
  is_training_ = true;
  // ... regular training code ...
  is_training_ = false;
}

// In Init() and ResetConfig(), handle the file as follows:
if (is_training_) {
  // Stop with an error if anything is wrong.
} else {
  // Simply continue if there are issues.
}
```

Regarding the tests, I'd be happy to write them!

Collaborator:

Thanks very much. It is not that simple.

For example, there are many workflows where training and prediction are done in the same process, using the same Booster. So a single property `is_training_` is not going to work.

There are also multiple APIs for training.

```cpp
void GBDT::Train(int snapshot_freq, const std::string& model_output_path);

bool GBDT::TrainOneIter(const score_t* gradients, const score_t* hessians);
```

And we'd also want to be careful to not introduce this type of checking on every boosting round, as that would hurt performance.

Maybe @shiyu1994 could help us figure out where to put a check like this.

Also referencing this related PR to help: #5653

Collaborator:

What if we treat forced splits as forbidden at inference time? That would also tell the user that forced splitting is impossible once the model has already been trained.

Collaborator:

Introducing a flag to check for whether the model is to be used for inference or training is quite complicated. That's why I think the current solution is acceptable.

Collaborator:

@jameslamb What do you think about keeping the current changes in this PR, given the reasons above?

Collaborator:

> @jameslamb What do you think about keeping the current changes in this PR, given the reasons above?

Sorry for the delay.

I think just raising a warning is an acceptable compromise... it gives users a hint to follow, and by not being a fatal error it shouldn't cause problems at inference time.

This will mean that if you train a model with forced splits, save it to a file, then load it in another environment where that file referenced by forcedsplits_filename does not exist, you'll now get a warning about this. That might be annoying for people but I think it's worth it for the benefits mentioned above.

So for this PR... I support this, but @KYash03 please add tests for the conditions I mentioned in https://github.com/microsoft/LightGBM/pull/6832/files#r1957536985 (but with the "file does not exist or is not readable" case always resulting in this warning message in the logs).


@shiyu1994 @StrikerRUS in the future, do you think we should move towards forced splits being considered "data" instead of a parameter? That way, it wouldn't get persisted in the model file (just as init_score and weight are not persisted in the model file). That'd be a clean way to achieve behavior like "forced splits are only used at training time", I think. If you agree with that as a better long-term state, I can write up a feature request describing it.

Collaborator:

> That way, it wouldn't get persisted in the model file (just as init_score and weight are not persisted in the model file).

Hey, I think it's a good idea!

```diff
+    } else {
+      std::stringstream buffer;
+      buffer << forced_splits_file.rdbuf();
+      std::string err;
+      forced_splits_json_ = Json::parse(buffer.str(), &err);
+      if (!err.empty()) {
+        Log::Fatal("Failed to parse forced splits file '%s': %s",
+                   config->forcedsplits_filename.c_str(), err.c_str());
+      }
+    }
   }

   objective_function_ = objective_function;
```
```diff
@@ -823,13 +832,23 @@ void GBDT::ResetConfig(const Config* config) {
   if (config_.get() != nullptr && config_->forcedsplits_filename != new_config->forcedsplits_filename) {
     // load forced_splits file
     if (!new_config->forcedsplits_filename.empty()) {
-      std::ifstream forced_splits_file(
-          new_config->forcedsplits_filename.c_str());
-      std::stringstream buffer;
-      buffer << forced_splits_file.rdbuf();
-      std::string err;
-      forced_splits_json_ = Json::parse(buffer.str(), &err);
-      tree_learner_->SetForcedSplit(&forced_splits_json_);
+      std::ifstream forced_splits_file(new_config->forcedsplits_filename.c_str());
+      if (!forced_splits_file.good()) {
+        Log::Warning("Forced splits file '%s' does not exist. Forced splits will be ignored.",
+                     new_config->forcedsplits_filename.c_str());
+        forced_splits_json_ = Json();
+        tree_learner_->SetForcedSplit(nullptr);
+      } else {
+        std::stringstream buffer;
+        buffer << forced_splits_file.rdbuf();
+        std::string err;
+        forced_splits_json_ = Json::parse(buffer.str(), &err);
+        if (!err.empty()) {
+          Log::Fatal("Failed to parse forced splits file '%s': %s",
+                     new_config->forcedsplits_filename.c_str(), err.c_str());
+        }
+        tree_learner_->SetForcedSplit(&forced_splits_json_);
+      }
     } else {
       forced_splits_json_ = Json();
       tree_learner_->SetForcedSplit(nullptr);
```
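
For context on what this code is parsing: the file named by `forcedsplits_filename` is a nested JSON object giving a feature index and threshold per forced node, with optional `left`/`right` children. The values below are purely illustrative:

```json
{
  "feature": 0,
  "threshold": 0.5,
  "right": {
    "feature": 1,
    "threshold": 0.0
  }
}
```

A file that is missing triggers the new `Log::Warning` path above, and one that is present but not valid JSON triggers the `Log::Fatal` path.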