I have been trying to run an AutoML forecasting experiment that allows only one algorithm (FBProphet) and blocks all other supported algorithms. The problem is that even though I specify the blocked algorithms, they still run in the experiment, taking up unnecessary runtime. For example, my experiment should run for only 1-2 hours, but it ends up running for 24-30 hours because the undesired algorithms still execute. Is there any way to prevent the blocked algorithms from running in my experiment so I can save significant runtime? I have attached a screenshot and my AutoML config code to illustrate the issue.
Code:
import logging

from azureml.train.automl import AutoMLConfig

# train_data, target_column_name, time_column_name, grain_column_names,
# compute_target, and experiment are all defined earlier in the notebook.
n_test_periods = 60
blocked_algos = ['ExtremeRandomTrees', 'DecisionTree', 'ElasticNet', 'LassoLars']

time_series_settings = {
    'time_column_name': time_column_name,
    'grain_column_names': grain_column_names,
    'forecast_horizon': n_test_periods
}

automl_config = AutoMLConfig(task='forecasting',
                             debug_log='Logs/prophet_forecasting_errors.log',
                             primary_metric='normalized_mean_absolute_error',
                             training_data=train_data,
                             label_column_name=target_column_name,
                             compute_target=compute_target,
                             featurization='off',
                             blocked_models=blocked_algos,
                             allowed_models=['Prophet'],
                             n_cross_validations=3,
                             verbosity=logging.INFO,
                             max_cores_per_iteration=6,
                             **time_series_settings)

remote_run = experiment.submit(automl_config, show_output=True)
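For reference, here is a trimmed-down sketch of what I expected to be enough to restrict the run to Prophet: an allowed_models whitelist on its own, plus experiment_timeout_hours as a hard cap on total runtime. The timeout value of 2 hours is just an illustrative guess on my part, not something from my original config:

import logging

from azureml.train.automl import AutoMLConfig

# Trimmed-down sketch under the same assumptions as above (train_data,
# compute_target, etc. defined earlier). experiment_timeout_hours is my
# addition as a runtime guard; 2 hours is an illustrative value only.
minimal_config = AutoMLConfig(task='forecasting',
                              primary_metric='normalized_mean_absolute_error',
                              training_data=train_data,
                              label_column_name=target_column_name,
                              compute_target=compute_target,
                              featurization='off',
                              allowed_models=['Prophet'],  # whitelist Prophet only
                              experiment_timeout_hours=2,  # hard cap on total runtime
                              n_cross_validations=3,
                              verbosity=logging.INFO,
                              **time_series_settings)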
Screenshot of the experiment (this run took 32 hours when it should ideally take about 56 minutes):
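To double-check which algorithms actually executed, I list the child iterations of the run and print each one's algorithm name. I am assuming here that AutoML child runs expose the algorithm under the 'run_algorithm' property, which is what I have seen in the run details:

# Print the algorithm used by each child iteration of the AutoML run.
# Assumes child runs carry the algorithm name in the 'run_algorithm'
# property (iterations without one, e.g. setup runs, are skipped).
for child in remote_run.get_children():
    algo = child.get_properties().get('run_algorithm')
    if algo:
        print(child.id, algo)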