I followed the sample from DP-100 lab 8A:
https://github.com/MicrosoftLearning/DP100/blob/master/08A%20-%20Tuning%20Hyperparameters.ipynb
I tried to tune hyperparameters for a RandomForestRegressor on the Boston dataset.
The code runs, but I am not able to get the metric or the output of the result.
What is the problem? Here is the code I used:
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
import joblib
import os
from azureml.core import Run
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
# Set regularization parameter
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
args = parser.parse_args()
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
print("Loading Data...")
diabetes = run.input_datasets['diabetes'].to_pandas_dataframe() # Get the training data from the estimator input
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', float(reg))  # np.float is removed in recent NumPy; use the built-in float
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
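For reference, the entry script train_boston.py used by the HyperDrive cell below is not included in the post. A minimal sketch of what such a script might look like, assuming the dataset input is named 'boston', its label column is 'MEDV', and MAE is logged under the name 'MAE' to match primary_metric_name:
%%writefile $experiment_folder/train_boston.py
# Minimal sketch (assumptions: input named 'boston', label column 'MEDV', metric logged as 'MAE')
import argparse
import os
import joblib
import pandas as pd
from azureml.core import Run
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Hyperparameter passed in by HyperDrive
parser = argparse.ArgumentParser()
parser.add_argument('--max_depth', type=int, dest='max_depth', default=100, help='maximum tree depth')
args = parser.parse_args()

# Get the experiment run context
run = Run.get_context()

# Load the Boston dataset from the estimator input
boston = run.input_datasets['boston'].to_pandas_dataframe()
X = boston.drop(columns=['MEDV']).values  # assumed label column name
y = boston['MEDV'].values

# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

# Train a random forest regressor with the sampled max_depth
model = RandomForestRegressor(max_depth=args.max_depth, random_state=0).fit(X_train, y_train)

# Log MAE under the same name used as primary_metric_name in the HyperDriveConfig
mae = mean_absolute_error(y_test, model.predict(X_test))
run.log('MAE', float(mae))

# Files saved in the outputs folder are automatically uploaded into the experiment record
os.makedirs('outputs', exist_ok=True)
joblib.dump(value=model, filename='outputs/boston_model.pkl')

run.complete()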
from azureml.core import Experiment
from azureml.train.sklearn import SKLearn
from azureml.train.hyperdrive import GridParameterSampling, MedianStoppingPolicy, HyperDriveConfig, PrimaryMetricGoal, choice, normal
from azureml.widgets import RunDetails
# Sample a range of parameter values
params = GridParameterSampling(
    {
        # Tune max_depth over a discrete set of values
        '--max_depth': choice(70, 100, 130, 160)
    }
)
# Get the training dataset
boston_ds = ws.datasets.get("boston dataset")
# Create an estimator that uses the remote compute
hyper_estimator = SKLearn(source_directory=experiment_folder,
                          inputs=[boston_ds.as_named_input('boston')], # Pass the dataset as an input...
                          pip_packages=['azureml-sdk'],                # ...so we need azureml-dataprep (it's in the SDK!)
                          entry_script='train_boston.py',
                          compute_target=training_cluster)
#early_termination_policy = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)
# Configure hyperdrive settings
hyperdrive = HyperDriveConfig(estimator=hyper_estimator,
                              hyperparameter_sampling=params,
                              policy=None,
                              primary_metric_name='MAE',
                              primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
                              max_total_runs=6,
                              max_concurrent_runs=4)
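# Note: HyperDrive can only report this primary metric if the entry script
# logs a value under exactly the same name, e.g. run.log('MAE', float(mae)).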
# Run the experiment
experiment = Experiment(workspace = ws, name = 'boston_training_hyperdrive')
run = experiment.submit(config=hyperdrive)
# Show the status in the notebook as the experiment runs
RunDetails(run).show()
run.wait_for_completion()
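Once the run completes, the per-child metrics and the best run can be retrieved with the standard HyperDriveRun/Run APIs; a minimal sketch:
# List each child run in order of the primary metric, then fetch the best one
for child_run in run.get_children_sorted_by_primary_metric():
    print(child_run)

best_run = run.get_best_run_by_primary_metric()
print('Best run metrics:', best_run.get_metrics())
print('Best run files:', best_run.get_file_names())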
@Tom-Zhou Thanks for the details. Here are the Azure ML samples.
Please follow the documentation below for Azure Machine Learning:
https://learn.microsoft.com/en-us/azure/machine-learning/