JobManagerTask Class

Specifies details of a Job Manager Task.

The Job Manager Task is automatically started when the Job is created. The Batch service tries to schedule the Job Manager Task before any other Tasks in the Job. When shrinking a Pool, the Batch service tries to preserve Nodes where Job Manager Tasks are running for as long as possible (that is, Compute Nodes running 'normal' Tasks are removed before Compute Nodes running Job Manager Tasks). When a Job Manager Task fails and needs to be restarted, the system tries to schedule it at the highest priority. If there are no idle Compute Nodes available, the system may terminate one of the running Tasks in the Pool and return it to the queue in order to make room for the Job Manager Task to restart.

Note that a Job Manager Task in one Job does not have priority over Tasks in other Jobs. Across Jobs, only Job-level priorities are observed. For example, if a Job Manager in a priority 0 Job needs to be restarted, it will not displace Tasks of a priority 1 Job.

Batch will retry Tasks when a recovery operation is triggered on a Node. Examples of recovery operations include (but are not limited to) when an unhealthy Node is rebooted or when a Compute Node disappears due to host failure. Retries due to recovery operations are independent of, and are not counted against, the maxTaskRetryCount. Even if the maxTaskRetryCount is 0, an internal retry due to a recovery operation may occur. Because of this, all Tasks should be idempotent: they must tolerate being interrupted and restarted without causing corruption or duplicate data. The best practice for long-running Tasks is to use some form of checkpointing.

All required parameters must be populated in order to send to Azure.

Inheritance
msrest.serialization.Model
JobManagerTask

Constructor

JobManagerTask(*, id: str, command_line: str, display_name: str = None, container_settings=None, resource_files=None, output_files=None, environment_settings=None, constraints=None, required_slots: int = None, kill_job_on_completion: bool = None, user_identity=None, run_exclusive: bool = None, application_package_references=None, authentication_token_settings=None, allow_low_priority_node: bool = None, **kwargs)
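
For example, a minimal sketch of constructing a Job Manager Task and submitting it with a new Job. The account, endpoint, pool ID, job ID, and script names are placeholders; note that the client constructor keyword is batch_url in recent azure-batch releases (base_url in older ones).

import azure.batch.models as models
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

# Placeholder credentials and endpoint.
credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.myregion.batch.azure.com"
)

# The command line does not run under a shell, so invoke one explicitly.
job_manager = models.JobManagerTask(
    id="jobmanager",
    command_line="/bin/sh -c 'python3 orchestrate.py'",  # hypothetical script
    display_name="Orchestrator",
    kill_job_on_completion=True,
    run_exclusive=False,
)

# The Job Manager Task is attached to the Job at creation time.
job = models.JobAddParameter(
    id="myjob",
    pool_info=models.PoolInformation(pool_id="mypool"),
    job_manager_task=job_manager,
)
client.job.add(job)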

Parameters

Name Description
id
Required
str

Required. The ID can contain any combination of alphanumeric characters including hyphens and underscores and cannot contain more than 64 characters.

display_name
Optional
str

It need not be unique and can contain any Unicode characters up to a maximum length of 1024.

command_line
Required
str

Required. The command line does not run under a shell, and therefore cannot take advantage of shell features such as environment variable expansion. If you want to take advantage of such features, you should invoke the shell in the command line, for example using "cmd /c MyCommand" in Windows or "/bin/sh -c MyCommand" in Linux. If the command line refers to file paths, it should use a relative path (relative to the Task working directory), or use the Batch provided environment variable (https://docs.microsoft.com/en-us/azure/batch/batch-compute-node-environment-variables).

container_settings
Optional

The settings for the container under which the Job Manager Task runs. If the Pool that will run this Task has containerConfiguration set, this must be set as well. If the Pool that will run this Task doesn't have containerConfiguration set, this must not be set. When this is specified, all directories recursively below the AZ_BATCH_NODE_ROOT_DIR (the root of Azure Batch directories on the node) are mapped into the container, all Task environment variables are mapped into the container, and the Task command line is executed in the container. Files produced in the container outside of AZ_BATCH_NODE_ROOT_DIR might not be reflected to the host disk, meaning that Batch file APIs will not be able to access those files.

resource_files
Optional

Files listed under this element are located in the Task's working directory. There is a maximum size for the list of resource files. When the max size is exceeded, the request will fail and the response error code will be RequestEntityTooLarge. If this occurs, the collection of ResourceFiles must be reduced in size. This can be achieved using .zip files, Application Packages, or Docker Containers.

output_files
Optional

For multi-instance Tasks, the files will only be uploaded from the Compute Node on which the primary Task is executed.

environment_settings
Optional

A list of environment variable settings for the Job Manager Task.

constraints
Optional

Constraints that apply to the Job Manager Task.

required_slots
Optional
int

The number of scheduling slots that the Task requires to run. The default is 1. A Task can only be scheduled to run on a compute node if the node has enough free scheduling slots available. For multi-instance Tasks, this property is not supported and must not be specified.

kill_job_on_completion
Optional

Whether completion of the Job Manager Task signifies completion of the entire Job. If true, when the Job Manager Task completes, the Batch service marks the Job as complete. If any Tasks are still running at this time (other than Job Release), those Tasks are terminated. If false, the completion of the Job Manager Task does not affect the Job status. In this case, you should either use the onAllTasksComplete attribute to terminate the Job, or have a client or user terminate the Job explicitly. An example of this is if the Job Manager creates a set of Tasks but then takes no further role in their execution. The default value is true. If you are using the onAllTasksComplete and onTaskFailure attributes to control Job lifetime, and using the Job Manager Task only to create the Tasks for the Job (not to monitor progress), then it is important to set killJobOnCompletion to false. (A sketch of this pattern follows this parameter list.)

user_identity
Optional

The user identity under which the Job Manager Task runs. If omitted, the Task runs as a non-administrative user unique to the Task.

run_exclusive
Optional

Whether the Job Manager Task requires exclusive use of the Compute Node where it runs. If true, no other Tasks will run on the same Node for as long as the Job Manager is running. If false, other Tasks can run simultaneously with the Job Manager on a Compute Node. The Job Manager Task counts normally against the Compute Node's concurrent Task limit, so this is only relevant if the Compute Node allows multiple concurrent Tasks. The default value is true.

application_package_references
Optional

Application Packages are downloaded and deployed to a shared directory, not the Task working directory. Therefore, if a referenced Application Package is already on the Compute Node, and is up to date, then it is not re-downloaded; the existing copy on the Compute Node is used. If a referenced Application Package cannot be installed, for example because the package has been deleted or because download failed, the Task fails.

authentication_token_settings
Optional

The settings for an authentication token that the Task can use to perform Batch service operations. If this property is set, the Batch service provides the Task with an authentication token which can be used to authenticate Batch service operations without requiring an Account access key. The token is provided via the AZ_BATCH_AUTHENTICATION_TOKEN environment variable. The operations that the Task can carry out using the token depend on the settings. For example, a Task can request Job permissions in order to add other Tasks to the Job, or check the status of the Job or of other Tasks under the Job.

allow_low_priority_node
Optional

Whether the Job Manager Task may run on a Spot/Low-priority Compute Node. The default value is true.
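
As noted under kill_job_on_completion and authentication_token_settings above, a common pattern is a Job Manager that only creates Tasks and then exits. The following is a hedged sketch of that pattern; the IDs and the spawner.py script are placeholders.

import azure.batch.models as models

job_manager = models.JobManagerTask(
    id="jobmanager",
    command_line="/bin/sh -c 'python3 spawner.py'",  # hypothetical script that adds Tasks
    # The Job Manager only creates Tasks, so its completion must not end the Job.
    kill_job_on_completion=False,
    # Request a token with Job-level permissions so the script can add Tasks
    # using AZ_BATCH_AUTHENTICATION_TOKEN instead of an account key.
    authentication_token_settings=models.AuthenticationTokenSettings(
        access=[models.AccessScope.job]
    ),
)

job = models.JobAddParameter(
    id="myjob",
    pool_info=models.PoolInformation(pool_id="mypool"),
    job_manager_task=job_manager,
    # Terminate the Job once every Task, including those added later, completes.
    on_all_tasks_complete=models.OnAllTasksComplete.terminate_job,
)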

Methods

as_dict

Return a dict that can be serialized with json.dump.

deserialize

Parse a str using the RestAPI syntax and return a model.

enable_additional_properties_sending

from_dict

Parse a dict using the given key extractors and return a model.

is_xml_model

serialize

Return the JSON that would be sent to Azure from this model.

validate

Validate this model recursively and return a list of ValidationError.

as_dict

Return a dict that can be serialized with json.dump.

Advanced usage can optionally pass a callback as a parameter:

The callback receives three arguments. Key is the attribute name used in Python. Attr_desc is a dict of metadata; it currently contains 'type' with the msrest type and 'key' with the RestAPI-encoded key. Value is the current value in this object.

The string returned is used to serialize the key. If the return type is a list, the result is treated as a hierarchical dict.

See the three examples in msrest.serialization:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, pass the kwarg is_xml=True.

as_dict(keep_readonly=True, key_transformer=attribute_transformer, **kwargs)

Parameters

Name Description
key_transformer
function

A key transformer function.

keep_readonly

Whether to serialize the read-only attributes.

default value: True

Returns

Type Description

A JSON-compatible dict object
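
A short sketch of the three usages, assuming job_manager is a populated JobManagerTask:

from msrest.serialization import full_restapi_key_transformer

# Default: Python attribute names, e.g. 'kill_job_on_completion'.
as_attributes = job_manager.as_dict()

# RestAPI-encoded keys, e.g. 'killJobOnCompletion'.
as_rest = job_manager.as_dict(key_transformer=full_restapi_key_transformer)

# A custom callback receives (key, attr_desc, value) and returns (key, value).
def upper_key_transformer(key, attr_desc, value):
    return (key.upper(), value)

as_upper = job_manager.as_dict(key_transformer=upper_key_transformer)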

deserialize

Parse a str using the RestAPI syntax and return a model.

deserialize(data, content_type=None)

Parameters

Name Description
data
Required
str

A str using RestAPI structure. JSON by default.

content_type
str

JSON by default, set application/xml if XML.

default value: None

Returns

Type Description

An instance of this model

Exceptions

Type Description
DeserializationError if something went wrong
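
A minimal sketch of deserialize, assuming a RestAPI-style JSON payload such as the Batch service returns (the field values are placeholders):

import json
from azure.batch.models import JobManagerTask

payload = json.dumps({
    "id": "jobmanager",
    "commandLine": "/bin/sh -c 'python3 orchestrate.py'",
    "killJobOnCompletion": True,
})

task = JobManagerTask.deserialize(payload)  # content_type defaults to JSON
print(task.command_line)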

enable_additional_properties_sending

enable_additional_properties_sending()

from_dict

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor are considered.

from_dict(data, key_extractors=None, content_type=None)

Parameters

Name Description
data
Required

A dict using RestAPI structure

content_type
str

JSON by default, set application/xml if XML.

default value: None
key_extractors
default value: None

Returns

Type Description

An instance of this model

Exceptions

Type Description
DeserializationError if something went wrong
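
A minimal sketch of from_dict; because the default extractors are case-insensitive, both RestAPI-style and attribute-style keys are accepted (the values are placeholders):

from azure.batch.models import JobManagerTask

task = JobManagerTask.from_dict({
    "id": "jobmanager",
    "commandLine": "/bin/sh -c 'python3 orchestrate.py'",  # RestAPI-style key
    "kill_job_on_completion": False,                       # attribute-style key
})
print(task.kill_job_on_completion)  # False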

is_xml_model

is_xml_model()

serialize

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, pass the kwarg is_xml=True.

serialize(keep_readonly=False, **kwargs)

Parameters

Name Description
keep_readonly

Whether to serialize the read-only attributes.

default value: False

Returns

Type Description

A JSON-compatible dict object
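
A short sketch, assuming job_manager is a populated JobManagerTask; the two calls below produce the same RestAPI-keyed dict:

from msrest.serialization import full_restapi_key_transformer

body = job_manager.serialize()
# e.g. {'id': 'jobmanager', 'commandLine': '...', 'killJobOnCompletion': False}

same_body = job_manager.as_dict(
    keep_readonly=False, key_transformer=full_restapi_key_transformer
)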

validate

Validate this model recursively and return a list of ValidationError.

validate()

Returns

Type Description

A list of validation errors
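
A minimal sketch of validate: command_line is required, so setting it to None yields a validation error (the exact message text is illustrative):

from azure.batch.models import JobManagerTask

task = JobManagerTask(id="jobmanager", command_line=None)
for error in task.validate() or []:
    print(error)  # e.g. reports that command_line cannot be None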