
[CUDA] max_depth parameter is ignored #6969

@lorenzolucchese

Description

When training a LightGBM model with device="cuda", the max_depth parameter appears to be ignored. When using device="cpu" or device="gpu" (i.e. the OpenCL implementation) instead, the max_depth constraint is enforced.

Reproducible example

import lightgbm as lgb
from sklearn.datasets import make_regression

# Generate synthetic regression data
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)

# Create and fit the LGBMRegressor with GPU support
model = lgb.LGBMRegressor(
    objective="regression",
    device="cuda",  # Use CUDA
    max_depth=10,
)
model.fit(X, y)

print(model.booster_.trees_to_dataframe().groupby("tree_index")["node_depth"].max())

Output:

tree_index
0      8
1      9
2      9
3      9
4      9
      ..
95    18
96    14
97    18
98    17
99    14
Name: node_depth, Length: 100, dtype: int64
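
For comparison, here is a minimal side-by-side check on the same data (a sketch, assuming a LightGBM build where both the "cpu" and "cuda" device types are available); per the description above, only the device="cuda" fit should produce depths beyond the configured limit.

import lightgbm as lgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)

for device in ("cpu", "cuda"):
    model = lgb.LGBMRegressor(objective="regression", device=device, max_depth=10)
    model.fit(X, y)
    # Deepest node_depth across all trees (same metric as the per-tree maxima above)
    deepest = model.booster_.trees_to_dataframe()["node_depth"].max()
    print(f"device={device}: deepest node_depth = {deepest}")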

Environment info

LightGBM version or commit hash: 4.6.0

Command(s) you used to install LightGBM

pixi add lightgbm

Labels

bug, gpu (CUDA): Issue is related to the CUDA GPU variant.
