MicrosoftML learner objects

Description

An instance of one of the following objects is returned by each training function. They all inherit from the BaseLearner class and implement common methods:

  • get_algo_args returns the training parameters,

  • coef_ retrieves the coefficients,

  • summary_ returns the training information.

The content varies depending on the trained learner, as the sketch below illustrates.
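
A minimal sketch of these accessors, assuming a small hypothetical pandas DataFrame and the rx_logistic_regression training function:

import pandas as pd
from microsoftml import rx_logistic_regression

# Hypothetical toy data set; any DataFrame with a boolean label and
# numeric features would do.
train = pd.DataFrame({
    "y":  [True, False, True, False, True, False],
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "x2": [0.5, 1.5, 0.5, 1.5, 0.5, 1.5],
})

# Each training function returns a learner object derived from BaseLearner.
model = rx_logistic_regression("y ~ x1 + x2", data=train)

print(model.get_algo_args())   # training parameters
print(model.coef_)             # fitted coefficients
print(model.summary_)          # training information (content depends on the learner)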

BaseLearner class

microsoftml.modules.base_learner.BaseLearner(**kwargs)

Base class for all learners.

coef_

Gets the model coefficients.

fit(formula: str, data: [revoscalepy.datasource.RxDataSource.RxDataSource,
    pandas.core.frame.DataFrame], ml_transforms: list = None,
    ml_transform_vars: list = None, row_selection: str = None,
    transforms: dict = None, transform_objects: dict = None,
    transform_function: str = None,
    transform_variables: list = None,
    transform_packages: list = None,
    transform_environment: dict = None, blocks_per_read: int = None,
    report_progress: int = None, verbose: int = 1,
    compute_context: revoscalepy.computecontext.RxComputeContext.RxComputeContext = None,
    **kargs)

Fits the model.

get_algo_args()

Gets the algorithm arguments.

predict(*args, **kwargs)

Calls microsoftml.rx_predict().

summary_

Gets the model summary.
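
The fit/predict workflow can also be driven from a learner instance directly. A sketch, assuming a hypothetical toy DataFrame (the exact keyword arguments that predict forwards to rx_predict are an assumption here):

import pandas as pd
import microsoftml

train = pd.DataFrame({
    "y":  [True, False, True, False],
    "x1": [0.1, 0.9, 0.2, 0.8],
})

learner = microsoftml.LogisticRegression()   # a BaseLearner subclass
learner.fit(formula="y ~ x1", data=train)    # fits the model
scores = learner.predict(data=train)         # delegates to microsoftml.rx_predict()
print(scores)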

Specific learners

FastTrees binary or regression model

microsoftml.FastTrees(method: ['binary', 'regression'] = 'binary',
    num_trees: int = 100, num_leaves: int = 20,
    learning_rate: float = 0.2, min_split: int = 10,
    example_fraction: float = 0.7, feature_fraction: float = 1,
    split_fraction: float = 1, num_bins: int = 255,
    first_use_penalty: float = 0, gain_conf_level: float = 0,
    unbalanced_sets: bool = False, train_threads: int = 8,
    random_seed: int = None,
    ensemble: microsoftml.modules.ensemble.EnsembleControl = None,
    **kargs)

Get the training node

get_train_node(**all_args)
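
A usage sketch with the corresponding rx_fast_trees training function, assuming a hypothetical toy DataFrame (the constructor arguments above map onto the same parameter names):

import pandas as pd
from microsoftml import rx_fast_trees

# Toy data for illustration only; real data sets should be much larger.
train = pd.DataFrame({
    "y":  [True, False, True, False, True, False],
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "x2": [6.0, 5.0, 4.0, 3.0, 2.0, 1.0],
})

model = rx_fast_trees("y ~ x1 + x2", data=train, method="binary",
                      num_trees=50, num_leaves=20, learning_rate=0.2)
print(model.summary_)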

One-class SVM

microsoftml.OneClassSvm(cache_size: float = 100,
    kernel: [linear_kernel, polynomial_kernel, rbf_kernel,
    sigmoid_kernel] = {'Name': 'RbfKernel', 'Settings': {}},
    epsilon: float = 0.001, nu: float = 0.1,
    shrink: bool = True, normalize: ['No', 'Warn', 'Auto',
    'Yes'] = 'Auto',
    ensemble: microsoftml.modules.ensemble.EnsembleControl = None,
    **kargs)
get_train_node(**all_args)
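
A usage sketch with the corresponding rx_oneclass_svm training function, assuming hypothetical unlabeled data (one-class SVM is trained without a response, so the formula lists only the features):

import pandas as pd
from microsoftml import rx_oneclass_svm, rx_predict

train = pd.DataFrame({
    "x1": [1.0, 1.1, 0.9, 1.0, 1.2, 0.8],
    "x2": [2.0, 2.1, 1.9, 2.0, 2.2, 1.8],
})

model = rx_oneclass_svm("~ x1 + x2", data=train, nu=0.1)
scores = rx_predict(model, data=train)   # anomaly scores per row
print(scores)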

FastForest binary or regression model

microsoftml.FastForest(method: ['binary', 'regression'] = 'binary',
    num_trees: int = 100, num_leaves: int = 20,
    min_split: int = 10, example_fraction: float = 0.7,
    feature_fraction: float = 0.7, split_fraction: float = 0.7,
    num_bins: int = 255, first_use_penalty: float = 0,
    gain_conf_level: float = 0, train_threads: int = 8,
    random_seed: int = None,
    ensemble: microsoftml.modules.ensemble.EnsembleControl = None,
    **kargs)
get_train_node(**all_args)
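
A usage sketch with the corresponding rx_fast_forest training function, assuming a hypothetical toy regression DataFrame:

import pandas as pd
from microsoftml import rx_fast_forest

train = pd.DataFrame({
    "y":  [1.5, 2.5, 3.5, 4.5, 5.5, 6.5],
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

model = rx_fast_forest("y ~ x1", data=train,
                       method="regression", num_trees=50)
print(model.summary_)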

SDCA binary or regression model

microsoftml.FastLinear(method: ['binary', 'regression'] = 'binary',
    loss_function: {'binary': [hinge_loss, log_loss, smoothed_hinge_loss],
    'regression': [squared_loss]} = None,
    l2_weight: float = None, l1_weight: float = None,
    train_threads: int = None, convergence_tolerance: float = 0.1,
    max_iterations: int = None, shuffle: bool = True,
    check_frequency: int = None, normalize: ['No', 'Warn', 'Auto',
    'Yes'] = 'Auto',
    ensemble: microsoftml.modules.ensemble.EnsembleControl = None,
    **kargs)
get_train_node(**all_args)
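
A usage sketch with the corresponding rx_fast_linear training function, assuming a hypothetical toy DataFrame; the SDCA-specific parameters (loss_function, l1_weight, l2_weight) are left at their defaults:

import pandas as pd
from microsoftml import rx_fast_linear

train = pd.DataFrame({
    "y":  [True, False, True, False, True, False],
    "x1": [0.1, 0.9, 0.2, 0.8, 0.15, 0.85],
    "x2": [1.0, 0.0, 1.0, 0.0, 1.0, 0.0],
})

model = rx_fast_linear("y ~ x1 + x2", data=train, method="binary")
print(model.coef_)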

Logistic regression

microsoftml.LogisticRegression(method: ['binary',
    'multiClass'] = 'binary', l2_weight: float = 1,
    l1_weight: float = 1, opt_tol: float = 1e-07,
    memory_size: int = 20, init_wts_diameter: float = 0,
    max_iterations: int = 2147483647,
    show_training_stats: bool = False, sgd_init_tol: float = 0,
    train_threads: int = None, dense_optimizer: bool = False,
    normalize: ['No', 'Warn', 'Auto', 'Yes'] = 'Auto',
    ensemble: microsoftml.modules.ensemble.EnsembleControl = None,
    **kargs)
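
A usage sketch with the corresponding rx_logistic_regression training function in multiclass mode, assuming a hypothetical toy DataFrame with a categorical label:

import pandas as pd
from microsoftml import rx_logistic_regression

train = pd.DataFrame({
    "species": pd.Categorical(["a", "b", "c", "a", "b", "c"]),
    "x1": [1.0, 2.0, 3.0, 1.1, 2.1, 3.1],
})

model = rx_logistic_regression("species ~ x1", data=train,
                               method="multiClass")
print(model.coef_)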

Neural network

microsoftml.NeuralNetwork(method: ['binary', 'multiClass',
    'regression'] = 'binary', num_hidden_nodes: int = 100,
    num_iterations: int = 100, optimizer: ['adadelta_optimizer',
    'sgd_optimizer'] = {'Name': 'SgdOptimizer', 'Settings': {}},
    net_definition: str = None, init_wts_diameter: float = 0.1,
    max_norm: float = 0, acceleration: ['avx_math', 'clr_math',
    'gpu_math', 'mkl_math', 'sse_math'] = {'Name': 'AvxMath',
    'Settings': {}}, mini_batch_size: int = 1, normalize: ['No',
    'Warn', 'Auto', 'Yes'] = 'Auto',
    ensemble: microsoftml.modules.ensemble.EnsembleControl = None,
    **kargs)
get_train_node(**all_args)
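
A usage sketch with the corresponding rx_neural_network training function, assuming a hypothetical toy DataFrame; num_hidden_nodes and num_iterations are the constructor parameters shown above:

import pandas as pd
from microsoftml import rx_neural_network

train = pd.DataFrame({
    "y":  [True, False, True, False, True, False],
    "x1": [0.0, 1.0, 0.1, 0.9, 0.2, 0.8],
})

model = rx_neural_network("y ~ x1", data=train, method="binary",
                          num_hidden_nodes=10, num_iterations=20)
print(model.summary_)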

Get the model AIC

aic(k=2)

Get the model coefficients

coef_

Get the residual deviance

deviance_

Get the algorithm arguments

get_algo_args()

Get the training node

get_train_node(**all_args)

See also

rx_fast_forest, rx_fast_trees, rx_fast_linear, rx_logistic_regression,
rx_neural_network, rx_oneclass_svm, rx_predict