LightGbmRegressor Class

Gradient Boosted Decision Trees

Inheritance
nimbusml.internal.core.ensemble._lightgbmregressor.LightGbmRegressor → LightGbmRegressor
nimbusml.base_predictor.BasePredictor → LightGbmRegressor
sklearn.base.RegressorMixin → LightGbmRegressor

Constructor

LightGbmRegressor(number_of_iterations=100, learning_rate=None, number_of_leaves=None, minimum_example_count_per_leaf=None, booster=None, normalize='Auto', caching='Auto', evaluation_metric='RootMeanSquaredError', maximum_bin_count_per_feature=255, verbose=False, silent=True, number_of_threads=None, early_stopping_round=0, batch_size=1048576, use_categorical_split=None, handle_missing_value=True, minimum_example_count_per_group=100, maximum_categorical_split_point_count=32, categorical_smoothing=10.0, l2_categorical_regularization=10.0, random_state=None, parallel_trainer=None, feature=None, group_id=None, label=None, weight=None, **params)
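
For instance, a minimal sketch of constructing the trainer with a few commonly tuned arguments (the values shown are illustrative, not recommendations):

   from nimbusml.ensemble import LightGbmRegressor

   # Defaults are used for everything not listed explicitly.
   model = LightGbmRegressor(
       number_of_iterations=200,  # more boosting rounds than the default 100
       learning_rate=0.1,         # step size for each boosting iteration
       number_of_leaves=31,       # cap on terminal nodes per tree
       random_state=42)           # fix the seed for reproducibility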

Parameters

feature

See Columns.

group_id

See Columns.

label

See Columns.

weight

See Columns.

number_of_iterations

Number of iterations.

learning_rate

Determines the size of the step taken in the direction of the gradient at each iteration of the learning process, and therefore how fast or slow the learner converges on the optimal solution. If the step size is too big, you might overshoot the optimal solution. If the step size is too small, training takes longer to converge.
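
To illustrate the trade-off, a sketch pairing the learning rate with the iteration count (the pairings below are arbitrary, not tuned values):

   from nimbusml.ensemble import LightGbmRegressor

   # A smaller step typically needs more boosting rounds to reach a
   # comparable training loss; a larger step converges faster but may
   # overshoot the optimum.
   fast_but_coarse = LightGbmRegressor(learning_rate=0.3, number_of_iterations=50)
   slow_but_steady = LightGbmRegressor(learning_rate=0.05, number_of_iterations=500)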

number_of_leaves

The maximum number of leaves (terminal nodes) that can be created in any tree. Higher values potentially increase the size of the tree and get better precision, but risk overfitting and requiring longer training times.

minimum_example_count_per_leaf

Minimum number of training instances required to form a leaf. That is, the minimal number of documents allowed in a leaf of a regression tree, out of the sub-sampled data. Larger values constrain tree growth and can reduce overfitting.

booster

Which booster to use. Available options are:

  1. Dart

  2. Gbdt

  3. Goss
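
Each option corresponds to a booster class in nimbusml.ensemble.booster that can be passed to the constructor, as sketched below (default booster settings are assumed):

   from nimbusml.ensemble import LightGbmRegressor
   from nimbusml.ensemble.booster import Dart, Gbdt, Goss

   # Each booster object selects a different tree-boosting strategy.
   dart_model = LightGbmRegressor(booster=Dart())  # dropout-regularized boosting
   gbdt_model = LightGbmRegressor(booster=Gbdt())  # standard gradient boosting
   goss_model = LightGbmRegressor(booster=Goss())  # gradient-based one-side sampling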

normalize

If Auto, the choice to normalize depends on the preference declared by the algorithm. This is the default choice. If No, no normalization is performed. If Yes, normalization is always performed. If Warn, a warning message is displayed when normalization is needed by the algorithm, but normalization is not performed. If normalization is performed, a MaxMin normalizer is used. This normalizer preserves sparsity by mapping zero to zero.
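
For example, to force normalization regardless of the algorithm's declared preference (a sketch; 'Auto' is the default):

   from nimbusml.ensemble import LightGbmRegressor

   # 'Yes' applies the MaxMin normalizer to the features before training.
   model = LightGbmRegressor(normalize='Yes')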

caching

Whether the trainer should cache the input training data.

evaluation_metric

Evaluation metric to use.

maximum_bin_count_per_feature

Maximum number of bucket bins per feature.

verbose

Whether to print verbose output.

silent

Whether to suppress the printing of running messages.

number_of_threads

Number of parallel threads used to run LightGBM.

early_stopping_round

Rounds of early stopping; 0 disables it.
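
A sketch of enabling early stopping (the round count here is illustrative):

   from nimbusml.ensemble import LightGbmRegressor

   # Stop boosting when the evaluation metric has not improved
   # for 20 consecutive rounds.
   model = LightGbmRegressor(early_stopping_round=20,
                             evaluation_metric='RootMeanSquaredError')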

batch_size

Number of entries in a batch when loading data.

use_categorical_split

Whether to enable categorical splits.

handle_missing_value

Whether to enable special handling of missing values.

minimum_example_count_per_group

Minimum number of instances per categorical group.

maximum_categorical_split_point_count

Maximum number of categorical thresholds.

categorical_smoothing

Laplace smoothing term for the categorical feature split. This helps avoid bias toward small categories.

l2_categorical_regularization

L2 regularization for categorical splits.
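
The categorical-split parameters above work together; a sketch showing them in one constructor call (the values are the documented defaults, with use_categorical_split switched on):

   from nimbusml.ensemble import LightGbmRegressor

   model = LightGbmRegressor(
       use_categorical_split=True,                # opt in to categorical splits
       minimum_example_count_per_group=100,       # instances per categorical group
       maximum_categorical_split_point_count=32,  # cap on categorical thresholds
       categorical_smoothing=10.0,                # Laplace smoothing term
       l2_categorical_regularization=10.0)        # L2 penalty on categorical splits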

random_state

Sets the random seed for LightGBM to use.

parallel_trainer

Parallel LightGBM learning algorithm.

params

Additional arguments sent to the compute engine.

Examples


   ###############################################################################
   # LightGbmRegressor
   from nimbusml import Pipeline, FileDataStream
   from nimbusml.datasets import get_dataset
   from nimbusml.ensemble import LightGbmRegressor
   from nimbusml.ensemble.booster import Gbdt
   from nimbusml.feature_extraction.categorical import OneHotVectorizer

   # data input (as a FileDataStream)
   path = get_dataset('infert').as_filepath()

   data = FileDataStream.read_csv(path)
   print(data.head())
   #    age  case education  induced  parity ... row_num  spontaneous  ...
   # 0   26     1    0-5yrs        1       6 ...       1            2  ...
   # 1   42     1    0-5yrs        1       1 ...       2            0  ...
   # 2   39     1    0-5yrs        2       6 ...       3            0  ...
   # 3   34     1    0-5yrs        2       4 ...       4            0  ...
   # 4   35     1   6-11yrs        1       3 ...       5            1  ...

   # define the training pipeline
   pipeline = Pipeline([
       OneHotVectorizer(columns={'edu': 'education'}),
       LightGbmRegressor(feature=['induced', 'edu'], label='age',
                         booster=Gbdt(reg_lambda=0.1))
   ])

   # train, predict, and evaluate
   metrics, predictions = pipeline.fit(data).test(data, output_scores=True)

   # print predictions
   print(predictions.head())
   #       Score
   # 0  34.008430
   # 1  34.008430
   # 2  33.160175
   # 3  33.160175
   # 4  32.472412
   # print evaluation metrics
   print(metrics)
   #   L1(avg)    L2(avg)  RMS(avg)  Loss-fn(avg)  R Squared
   # 0  4.10419  24.153105  4.914581     24.153105   0.120673

Remarks

LightGBM is an open-source implementation of gradient boosted decision trees. It is available in nimbusml as a binary classification trainer, a multiclass trainer, a regression trainer, and a ranking trainer.

Reference

GitHub: LightGBM

Methods

get_params

Get the parameters for this operator.

get_params(deep=False)

Parameters

deep
default value: False
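
For example, assuming the sklearn-style convention that get_params returns a dict of the constructor arguments:

   from nimbusml.ensemble import LightGbmRegressor

   model = LightGbmRegressor(number_of_leaves=31)
   params = model.get_params()
   print(params['number_of_leaves'])  # expected: 31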