Azure ML deployment error from designer: "There is no label column in 'Scored dataset'"

Marri, Ananda Krishna 1 Reputation point
2021-03-25T21:50:14.857+00:00

Hello All,

We have developed the model below. It built and trained successfully, but when we deploy it as a real-time inference pipeline it throws the following error.

81722-azure-ml-error.png

Below is the deployment log:

2021/03/25 14:48:21 Attempt 1 of http call to http://10.0.0.4:16384/sendlogstoartifacts/info
2021/03/25 14:48:21 Attempt 1 of http call to http://10.0.0.4:16384/sendlogstoartifacts/status
[2021-03-25T14:48:23.131202] Entering context manager injector.
[context_manager_injector.py] Command line Options: Namespace(inject=['ProjectPythonPath:context_managers.ProjectPythonPath', 'Dataset:context_managers.Datasets', 'RunHistory:context_managers.RunHistory', 'TrackUserError:context_managers.TrackUserError'], invocation=['urldecode_invoker.py', 'python', '-m', 'azureml.studio.modulehost.module_invoker', '--module-name=azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', '--evaluation-results', 'DatasetOutputConfig:Evaluation_results', '--scored-dataset=/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g'])
Script type = None
Starting the daemon thread to refresh tokens in background for process with pid = 79
[2021-03-25T14:48:25.777114] Entering Run History Context Manager.
[2021-03-25T14:48:26.588216] Current directory: /mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/mounts/workspaceblobstore/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b
[2021-03-25T14:48:26.588401] Preparing to call script [urldecode_invoker.py] with arguments:['python', '-m', 'azureml.studio.modulehost.module_invoker', '--module-name=azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', '--evaluation-results', '$Evaluation_results', '--scored-dataset=/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g']
[2021-03-25T14:48:26.588510] After variable expansion, calling script [urldecode_invoker.py] with arguments:['python', '-m', 'azureml.studio.modulehost.module_invoker', '--module-name=azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', '--evaluation-results', '/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpbh275yuo', '--scored-dataset=/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g']

2021/03/25 14:48:26 Not exporting to RunHistory as the exporter is either stopped or there is no data.
Stopped: false
OriginalData: 1
FilteredData: 0.
Session_id = 77d2289a-878a-4d00-99c0-9d4112bd03b4
Invoking module by urldecode_invoker 0.0.8.

Module type: official module.

Using runpy to invoke module 'azureml.studio.modulehost.module_invoker'.

2021-03-25 14:48:27,284 studio.modulehost INFO Reset logging level to DEBUG
2021-03-25 14:48:27,284 studio.modulehost INFO Load pyarrow.parquet explicitly: <module 'pyarrow.parquet' from '/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/pyarrow/parquet.py'>
2021-03-25 14:48:27,284 studio.core INFO execute_with_cli - Start:
2021-03-25 14:48:27,284 studio.modulehost INFO | ALGHOST 0.0.150
2021-03-25 14:48:28,130 studio.modulehost INFO | CLI arguments parsed: {'module_name': 'azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module', 'OutputPortsInternal': {'Evaluation results': '/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpbh275yuo'}, 'InputPortsInternal': {'Scored dataset': '/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g'}}
2021-03-25 14:48:28,139 studio.modulehost INFO | Invoking ModuleEntry(azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module; EvaluateModelModule; run)
2021-03-25 14:48:28,139 studio.core DEBUG | Input Ports:
2021-03-25 14:48:28,139 studio.core DEBUG | | Scored dataset = <azureml.studio.modulehost.cli_parser.CliInputValue object at 0x7fe5ceeae5c0>
2021-03-25 14:48:28,139 studio.core DEBUG | Output Ports:
2021-03-25 14:48:28,139 studio.core DEBUG | | Evaluation results = /mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpbh275yuo
2021-03-25 14:48:28,140 studio.core DEBUG | Parameters:
2021-03-25 14:48:28,140 studio.core DEBUG | | (empty)
2021-03-25 14:48:28,140 studio.core DEBUG | Environment Variables:
2021-03-25 14:48:28,140 studio.core DEBUG | | AZUREML_DATAREFERENCE_Scored_dataset = /mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g
2021-03-25 14:48:28,140 studio.core INFO | Reflect input ports and parameters - Start:
2021-03-25 14:48:28,141 studio.core INFO | | Handle input port "Scored dataset" - Start:
2021-03-25 14:48:28,141 studio.core INFO | | | Mount/Download dataset to '/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g' - Start:
2021-03-25 14:48:28,141 studio.modulehost DEBUG | | | | Content of directory /mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g:
2021-03-25 14:48:28,482 studio.modulehost DEBUG | | | | | _meta.yaml
2021-03-25 14:48:28,482 studio.modulehost DEBUG | | | | | _samples.json
2021-03-25 14:48:28,482 studio.modulehost DEBUG | | | | | data.dataset
2021-03-25 14:48:28,483 studio.modulehost DEBUG | | | | | data.dataset.parquet
2021-03-25 14:48:28,483 studio.modulehost DEBUG | | | | | data.metadata
2021-03-25 14:48:28,483 studio.modulehost DEBUG | | | | | data.schema
2021-03-25 14:48:28,483 studio.modulehost DEBUG | | | | | data.visualization
2021-03-25 14:48:28,483 studio.modulehost DEBUG | | | | | data_type.json
2021-03-25 14:48:28,891 studio.modulehost DEBUG | | | | | schema/_schema.json
2021-03-25 14:48:28,891 studio.core INFO | | | Mount/Download dataset to '/mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g' - End with 0.7505s elapsed.
2021-03-25 14:48:28,892 studio.core INFO | | | Try to read from /mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g via meta - Start:
2021-03-25 14:48:29,215 studio.common INFO | | | | Load DataTableMeta successfully, path=data.dataset
2021-03-25 14:48:29,220 studio.common INFO | | | | Load meta data from directory successfully, data=DataFrameDirectory(meta={'type': 'DataFrameDirectory', 'visualization': [{'type': 'Visualization', 'path': 'data.visualization'}], 'extension': {'DataTableMeta': 'data.dataset'}, 'format': 'Parquet', 'data': 'data.dataset.parquet', 'samples': '_samples.json', 'schema': 'schema/_schema.json'}), type=<class 'azureml.studio.common.datatable.data_table_directory.DataTableDirectory'>
2021-03-25 14:48:29,224 studio.core INFO | | | Try to read from /mnt/batch/tasks/shared/LS_root/jobs/azuremldemo/azureml/7230c4b5-94a5-4af9-b5ff-5cd06433f19b/wd/tmpj67_av4g via meta - End with 0.3316s elapsed.
2021-03-25 14:48:29,224 studio.core INFO | | Handle input port "Scored dataset" - End with 1.0836s elapsed.
2021-03-25 14:48:29,225 studio.core INFO | | Handle input port "Scored dataset to compare" - Start:
2021-03-25 14:48:29,225 studio.modulehost WARNING | | | File 'None' does not exist.
2021-03-25 14:48:29,225 studio.core INFO | | Handle input port "Scored dataset to compare" - End with 0.0001s elapsed.
2021-03-25 14:48:29,225 studio.core INFO | Reflect input ports and parameters - End with 1.0843s elapsed.
2021-03-25 14:48:29,225 studio.core INFO | EvaluateModelModule.run - Start:
2021-03-25 14:48:29,225 studio.core DEBUG | | kwargs:
2021-03-25 14:48:29,225 studio.core DEBUG | | | scored_data = <azureml.studio.common.datatable.data_table.DataTable object at 0x7fe5cee5a668>
2021-03-25 14:48:29,225 studio.core DEBUG | | | scored_data_to_compare = None
2021-03-25 14:48:29,226 studio.core DEBUG | | validated_args:
2021-03-25 14:48:29,226 studio.core DEBUG | | | scored_data = <azureml.studio.common.datatable.data_table.DataTable object at 0x7fe5cee5a668>
2021-03-25 14:48:29,226 studio.core DEBUG | | | scored_data_to_compare = None
2021-03-25 14:48:29,226 studio.module INFO | | Validate input data (Scored Data).
2021-03-25 14:48:29,227 studio.core INFO | EvaluateModelModule.run - End with 0.0014s elapsed.
2021-03-25 14:48:29,227 studio.modulehost INFO | Set error info in module statistics
2021-03-25 14:48:29,227 studio.core INFO | Logging exception information of module execution - Start:
2021-03-25 14:48:29,227 studio.modulehost INFO | | Session_id = 77d2289a-878a-4d00-99c0-9d4112bd03b4
2021-03-25 14:48:29,227 studio.core INFO | | ModuleStatistics.log_stack_trace_telemetry - Start:
2021-03-25 14:48:29,769 studio.core INFO | | ModuleStatistics.log_stack_trace_telemetry - End with 0.5417s elapsed.
2021-03-25 14:48:29,769 studio.modulehost ERROR | | Get ModuleError when invoking ModuleEntry(azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module; EvaluateModelModule; run)
Traceback (most recent call last):
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 379, in exec
    output_tuple = self._entry.func(**reflected_input_ports, **reflected_parameters)
      reflected_input_ports = {'scored_data': <azureml.studio.common.datatable.data_table.DataTable object at 0x7fe5cee5a668>, 'scored_data_to_compare': None}
      reflected_parameters = {}
      self = <azureml.studio.modulehost.module_reflector.ModuleReflector object at 0x7fe5ceeae358>
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 76, in wrapper
    ret = func(*args, **validated_args)
      func = <function EvaluateModelModule.run at 0x7fe5cee95f28>
      args = ()
      validated_args = {'scored_data': <azureml.studio.common.datatable.data_table.DataTable object at 0x7fe5cee5a668>, 'scored_data_to_compare': None}
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 57, in run
    output_values = EvaluateModelModule.evaluate_generic(**input_values)
      input_values = {'scored_data_to_compare': None, 'scored_data': <azureml.studio.common.datatable.data_table.DataTable object at 0x7fe5cee5a668>, 'input_values': {...}}
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 168, in evaluate_generic
    cls._validate_input(scored_data=scored_data, scored_data_to_compare=scored_data_to_compare)
      cls = <class 'azureml.studio.modules.ml.evaluate.evaluate_generic_module.evaluate_generic_module.EvaluateModelModule'>
      scored_data = <azureml.studio.common.datatable.data_table.DataTable object at 0x7fe5cee5a668>
      scored_data_to_compare = None
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 146, in _validate_input
    dataset_name=cls._args.scored_data.friendly_name)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 133, in _validate_data_table
    error_setting.ErrorMapping.throw(error_setting.NotLabeledDatasetError(dataset_name=dataset_name))
      dataset_name = 'Scored dataset'
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/common/error.py", line 821, in throw
    raise err
      err = NotLabeledDatasetError('There is no label column in "Scored dataset".',)
NotLabeledDatasetError: There is no label column in "Scored dataset".
2021-03-25 14:48:29,771 studio.core INFO | Logging exception information of module execution - End with 0.5435s elapsed.
2021-03-25 14:48:29,771 studio.core INFO | ModuleStatistics.save_to_azureml - Start:
2021-03-25 14:48:30,030 studio.core INFO | ModuleStatistics.save_to_azureml - End with 0.2591s elapsed.
2021-03-25 14:48:30,030 studio.core INFO execute_with_cli - End with 2.7457s elapsed.
Starting the daemon thread to refresh tokens in background for process with pid = 79

[2021-03-25T14:48:30.046295] The experiment failed. Finalizing run...
Cleaning up all outstanding Run operations, waiting 900.0 seconds
3 items cleaning up...
Cleanup took 0.22303390502929688 seconds
Starting the daemon thread to refresh tokens in background for process with pid = 79
Traceback (most recent call last):
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_invoker.py", line 7, in <module>
    execute(sys.argv)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_host_executor.py", line 41, in execute
    return execute_with_cli(original_args)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/core/logger.py", line 209, in wrapper
    ret = func(*args, **kwargs)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_host_executor.py", line 52, in execute_with_cli
    do_execute_with_env(parser, FolderRuntimeEnv())
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_host_executor.py", line 68, in do_execute_with_env
    module_statistics_folder=parser.module_statistics_folder
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 397, in exec
    self._handle_exception(bex)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 471, in _handle_exception
    raise exception
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 379, in exec
    output_tuple = self._entry.func(**reflected_input_ports, **reflected_parameters)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 76, in wrapper
    ret = func(*args, **validated_args)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 57, in run
    output_values = EvaluateModelModule.evaluate_generic(**input_values)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 168, in evaluate_generic
    cls._validate_input(scored_data=scored_data, scored_data_to_compare=scored_data_to_compare)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 146, in _validate_input
    dataset_name=cls._args.scored_data.friendly_name)
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/evaluate/evaluate_generic_module/evaluate_generic_module.py", line 133, in _validate_data_table
    error_setting.ErrorMapping.throw(error_setting.NotLabeledDatasetError(dataset_name=dataset_name))
  File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/common/error.py", line 821, in throw
    raise err
azureml.studio.common.error.NotLabeledDatasetError: There is no label column in "Scored dataset".

[2021-03-25T14:48:31.272570] Finished context manager injector with Exception.
2021/03/25 14:48:32 Could not parse control script error at path: /mnt/batch/tasks/workitems/335323a5-4a6b-472a-8676-f079f4b45127/job-1/7230c4b5-94a5-4af9-b_d7e8588d-bb39-4e7e-b730-1d9bc350919b/wd/runTaskLetTask_error.json because: File /mnt/batch/tasks/workitems/335323a5-4a6b-472a-8676-f079f4b45127/job-1/7230c4b5-94a5-4af9-b_d7e8588d-bb39-4e7e-b730-1d9bc350919b/wd/runTaskLetTask_error.json doesn't exist, continuing without
2021/03/25 14:48:32 Failed to run the wrapper cmd with err: exit status 1
2021/03/25 14:48:32 Attempt 1 of http call to http://10.0.0.4:16384/sendlogstoartifacts/status
2021/03/25 14:48:32 mpirun version string: {
Intel(R) MPI Library for Linux* OS, Version 2018 Update 3 Build 20180411 (id: 18329)
Copyright 2003-2018 Intel Corporation.
}
2021/03/25 14:48:32 MPI publisher: intel ; version: 2018
2021/03/25 14:48:32 Not exporting to RunHistory as the exporter is either stopped or there is no data.
Stopped: false
OriginalData: 2
FilteredData: 0.
2021/03/25 14:48:32 Process Exiting with Code: 1

Azure Machine Learning
An Azure machine learning service for building and deploying models.

1 answer

  1. Vahid Ghafarpour 23,385 Reputation points Volunteer Moderator
    2023-08-06T06:57:54.57+00:00

    The "NotLabeledDatasetError" comes from the Evaluate Model module, which requires the scored dataset to contain a ground-truth label column to compare predictions against. Data sent to a real-time endpoint normally carries no label column, so remove the Evaluate Model module from the real-time inference pipeline before deploying (evaluation belongs in the training pipeline), or ensure the label column is carried through to the scored dataset. With that fixed, the deployment should run successfully.
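    The condition the module enforces can be sketched in a few lines of pandas. This is an illustrative check only, not the designer's internal code; the column names (`label`, `Scored Labels`, `Scored Probabilities`) are assumptions standing in for whatever your pipeline produces:

```python
import pandas as pd

# Hypothetical scored output at inference time: Score Model has appended
# prediction columns, but there is no ground-truth "label" column --
# exactly the situation that triggers NotLabeledDatasetError.
scored = pd.DataFrame({
    "feature_1": [0.2, 0.7, 0.5],
    "Scored Labels": [0, 1, 1],
    "Scored Probabilities": [0.31, 0.84, 0.66],
})

LABEL_COLUMN = "label"  # assumed name of the ground-truth column

def can_evaluate(df: pd.DataFrame, label_column: str = LABEL_COLUMN) -> bool:
    """True only if the dataset still carries a ground-truth column,
    i.e. Evaluate Model would have something to compare predictions to."""
    return label_column in df.columns

if can_evaluate(scored):
    print("Label column present - Evaluate Model can run.")
else:
    print(f'There is no label column "{LABEL_COLUMN}" - '
          "drop Evaluate Model from the inference pipeline.")
```

    Running a check like this on the dataset feeding Evaluate Model makes it easy to see whether the label column was dropped somewhere upstream (e.g. by a Select Columns step) or was never part of the inference input.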

