ArmMachineLearningModelFactory.SparkJob Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Overloads
SparkJob(String, IDictionary<String,String>, IDictionary<String,String>, ResourceIdentifier, ResourceIdentifier, String, String, MachineLearningIdentityConfiguration, Nullable<Boolean>, NotificationSetting, IDictionary<String,SecretConfiguration>, IDictionary<String,MachineLearningJobService>, Nullable<MachineLearningJobStatus>, IEnumerable<String>, String, String, IDictionary<String,String>, SparkJobEntry, String, IEnumerable<String>, IDictionary<String,MachineLearningJobInput>, IEnumerable<String>, IDictionary<String,MachineLearningJobOutput>, IEnumerable<String>, JobQueueSettings, SparkResourceConfiguration)
Initializes a new instance of SparkJob.
public static Azure.ResourceManager.MachineLearning.Models.SparkJob SparkJob (string description = default, System.Collections.Generic.IDictionary<string,string> properties = default, System.Collections.Generic.IDictionary<string,string> tags = default, Azure.Core.ResourceIdentifier componentId = default, Azure.Core.ResourceIdentifier computeId = default, string displayName = default, string experimentName = default, Azure.ResourceManager.MachineLearning.Models.MachineLearningIdentityConfiguration identity = default, bool? isArchived = default, Azure.ResourceManager.MachineLearning.Models.NotificationSetting notificationSetting = default, System.Collections.Generic.IDictionary<string,Azure.ResourceManager.MachineLearning.Models.SecretConfiguration> secretsConfiguration = default, System.Collections.Generic.IDictionary<string,Azure.ResourceManager.MachineLearning.Models.MachineLearningJobService> services = default, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobStatus? status = default, System.Collections.Generic.IEnumerable<string> archives = default, string args = default, string codeId = default, System.Collections.Generic.IDictionary<string,string> conf = default, Azure.ResourceManager.MachineLearning.Models.SparkJobEntry entry = default, string environmentId = default, System.Collections.Generic.IEnumerable<string> files = default, System.Collections.Generic.IDictionary<string,Azure.ResourceManager.MachineLearning.Models.MachineLearningJobInput> inputs = default, System.Collections.Generic.IEnumerable<string> jars = default, System.Collections.Generic.IDictionary<string,Azure.ResourceManager.MachineLearning.Models.MachineLearningJobOutput> outputs = default, System.Collections.Generic.IEnumerable<string> pyFiles = default, Azure.ResourceManager.MachineLearning.Models.JobQueueSettings queueSettings = default, Azure.ResourceManager.MachineLearning.Models.SparkResourceConfiguration resources = default);
static member SparkJob : string * System.Collections.Generic.IDictionary<string, string> * System.Collections.Generic.IDictionary<string, string> * Azure.Core.ResourceIdentifier * Azure.Core.ResourceIdentifier * string * string * Azure.ResourceManager.MachineLearning.Models.MachineLearningIdentityConfiguration * Nullable<bool> * Azure.ResourceManager.MachineLearning.Models.NotificationSetting * System.Collections.Generic.IDictionary<string, Azure.ResourceManager.MachineLearning.Models.SecretConfiguration> * System.Collections.Generic.IDictionary<string, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobService> * Nullable<Azure.ResourceManager.MachineLearning.Models.MachineLearningJobStatus> * seq<string> * string * string * System.Collections.Generic.IDictionary<string, string> * Azure.ResourceManager.MachineLearning.Models.SparkJobEntry * string * seq<string> * System.Collections.Generic.IDictionary<string, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobInput> * seq<string> * System.Collections.Generic.IDictionary<string, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobOutput> * seq<string> * Azure.ResourceManager.MachineLearning.Models.JobQueueSettings * Azure.ResourceManager.MachineLearning.Models.SparkResourceConfiguration -> Azure.ResourceManager.MachineLearning.Models.SparkJob
Public Shared Function SparkJob (Optional description As String = Nothing, Optional properties As IDictionary(Of String, String) = Nothing, Optional tags As IDictionary(Of String, String) = Nothing, Optional componentId As ResourceIdentifier = Nothing, Optional computeId As ResourceIdentifier = Nothing, Optional displayName As String = Nothing, Optional experimentName As String = Nothing, Optional identity As MachineLearningIdentityConfiguration = Nothing, Optional isArchived As Nullable(Of Boolean) = Nothing, Optional notificationSetting As NotificationSetting = Nothing, Optional secretsConfiguration As IDictionary(Of String, SecretConfiguration) = Nothing, Optional services As IDictionary(Of String, MachineLearningJobService) = Nothing, Optional status As Nullable(Of MachineLearningJobStatus) = Nothing, Optional archives As IEnumerable(Of String) = Nothing, Optional args As String = Nothing, Optional codeId As String = Nothing, Optional conf As IDictionary(Of String, String) = Nothing, Optional entry As SparkJobEntry = Nothing, Optional environmentId As String = Nothing, Optional files As IEnumerable(Of String) = Nothing, Optional inputs As IDictionary(Of String, MachineLearningJobInput) = Nothing, Optional jars As IEnumerable(Of String) = Nothing, Optional outputs As IDictionary(Of String, MachineLearningJobOutput) = Nothing, Optional pyFiles As IEnumerable(Of String) = Nothing, Optional queueSettings As JobQueueSettings = Nothing, Optional resources As SparkResourceConfiguration = Nothing) As SparkJob
Parameters
- description
- String
The asset description text.
- properties
- IDictionary<String,String>
The asset property dictionary.
- tags
- IDictionary<String,String>
Tag dictionary. Tags can be added, removed, and updated.
- componentId
- ResourceIdentifier
ARM resource ID of the component resource.
- computeId
- ResourceIdentifier
ARM resource ID of the compute resource.
- displayName
- String
Display name of job.
- experimentName
- String
The name of the experiment the job belongs to. If not set, the job is placed in the "Default" experiment.
- identity
- MachineLearningIdentityConfiguration
Identity configuration. If set, this should be one of AmlToken, ManagedIdentity, UserIdentity, or null; defaults to AmlToken if null. Note that MachineLearningIdentityConfiguration is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include AmlToken, MachineLearningManagedIdentity, and MachineLearningUserIdentity.
- isArchived
- Nullable<Boolean>
Is the asset archived?
- notificationSetting
- NotificationSetting
Notification setting for the job.
- secretsConfiguration
- IDictionary<String,SecretConfiguration>
Configuration for secrets to be made available during runtime.
- services
- IDictionary<String,MachineLearningJobService>
Dictionary of job endpoints (JobEndpoints). For local jobs, a job endpoint has an endpoint value of FileStreamObject.
- status
- Nullable<MachineLearningJobStatus>
Status of the job.
- archives
- IEnumerable<String>
Archive files used in the job.
- args
- String
Arguments for the job.
- codeId
- String
[Required] ARM resource ID of the code asset.
- conf
- IDictionary<String,String>
Spark configuration properties.
- entry
- SparkJobEntry
[Required] The entry to execute on startup of the job. Note that SparkJobEntry is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include SparkJobPythonEntry and SparkJobScalaEntry.
- environmentId
- String
The ARM resource ID of the Environment specification for the job.
- files
- IEnumerable<String>
Files used in the job.
- inputs
- IDictionary<String,MachineLearningJobInput>
Mapping of input data bindings used in the job. Note that MachineLearningJobInput is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include MachineLearningCustomModelJobInput, MachineLearningLiteralJobInput, MachineLearningFlowModelJobInput, MachineLearningTableJobInput, MachineLearningTritonModelJobInput, MachineLearningUriFileJobInput, and MachineLearningUriFolderJobInput.
- jars
- IEnumerable<String>
Jar files used in the job.
- outputs
- IDictionary<String,MachineLearningJobOutput>
Mapping of output data bindings used in the job. Note that MachineLearningJobOutput is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include MachineLearningCustomModelJobOutput, MachineLearningFlowModelJobOutput, MachineLearningTableJobOutput, MachineLearningTritonModelJobOutput, MachineLearningUriFileJobOutput, and MachineLearningUriFolderJobOutput.
- pyFiles
- IEnumerable<String>
Python files used in the job.
- queueSettings
- JobQueueSettings
Queue settings for the job.
- resources
- SparkResourceConfiguration
Compute resource configuration for the job.
Returns
A new SparkJob instance for mocking.
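Examples
The following sketch shows one way to call this overload to build a SparkJob model for use in mocked responses. It is a minimal example, not a definitive pattern: every resource ID, URI, and file name is a placeholder, and the constructors used for SparkJobPythonEntry and MachineLearningUriFileJobInput (a string entry file and a Uri, respectively) are assumptions based on the derived types listed in the parameter notes above.

```csharp
using System;
using System.Collections.Generic;
using Azure.Core;
using Azure.ResourceManager.MachineLearning.Models;

// Placeholder workspace ARM ID prefix used for the compute and code IDs below.
const string ws =
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg" +
    "/providers/Microsoft.MachineLearningServices/workspaces/ws";

// Build a SparkJob model for mocking; all IDs, URIs, and names are placeholders.
SparkJob job = ArmMachineLearningModelFactory.SparkJob(
    description: "Nightly feature-engineering job",
    displayName: "spark-feature-job",
    computeId: new ResourceIdentifier($"{ws}/computes/spark-pool"),
    codeId: $"{ws}/codes/my-code/versions/1",     // this overload takes codeId as a string
    entry: new SparkJobPythonEntry("main.py"),    // derived SparkJobEntry type (assumed constructor)
    inputs: new Dictionary<string, MachineLearningJobInput>
    {
        // MachineLearningUriFileJobInput is one of the derived input types listed above.
        ["raw"] = new MachineLearningUriFileJobInput(
            new Uri("https://example.blob.core.windows.net/data/raw.csv"))
    },
    status: MachineLearningJobStatus.Completed);

Console.WriteLine(job.DisplayName);
```

Because every parameter is optional, only the properties a test actually asserts on need to be supplied; the rest remain at their defaults.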
Applies to
SparkJob(String, IDictionary<String,String>, IDictionary<String,String>, String, Nullable<MachineLearningJobStatus>, String, IDictionary<String,MachineLearningJobService>, ResourceIdentifier, Nullable<Boolean>, MachineLearningIdentityConfiguration, ResourceIdentifier, NotificationSetting, SparkResourceConfiguration, String, ResourceIdentifier, SparkJobEntry, ResourceIdentifier, IDictionary<String,MachineLearningJobInput>, IDictionary<String,MachineLearningJobOutput>, IEnumerable<String>, IEnumerable<String>, IEnumerable<String>, IEnumerable<String>, IDictionary<String,String>, Nullable<JobTier>, IDictionary<String,String>)
Initializes a new instance of SparkJob.
public static Azure.ResourceManager.MachineLearning.Models.SparkJob SparkJob (string description = default, System.Collections.Generic.IDictionary<string,string> tags = default, System.Collections.Generic.IDictionary<string,string> properties = default, string displayName = default, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobStatus? status = default, string experimentName = default, System.Collections.Generic.IDictionary<string,Azure.ResourceManager.MachineLearning.Models.MachineLearningJobService> services = default, Azure.Core.ResourceIdentifier computeId = default, bool? isArchived = default, Azure.ResourceManager.MachineLearning.Models.MachineLearningIdentityConfiguration identity = default, Azure.Core.ResourceIdentifier componentId = default, Azure.ResourceManager.MachineLearning.Models.NotificationSetting notificationSetting = default, Azure.ResourceManager.MachineLearning.Models.SparkResourceConfiguration resources = default, string args = default, Azure.Core.ResourceIdentifier codeId = default, Azure.ResourceManager.MachineLearning.Models.SparkJobEntry entry = default, Azure.Core.ResourceIdentifier environmentId = default, System.Collections.Generic.IDictionary<string,Azure.ResourceManager.MachineLearning.Models.MachineLearningJobInput> inputs = default, System.Collections.Generic.IDictionary<string,Azure.ResourceManager.MachineLearning.Models.MachineLearningJobOutput> outputs = default, System.Collections.Generic.IEnumerable<string> pyFiles = default, System.Collections.Generic.IEnumerable<string> jars = default, System.Collections.Generic.IEnumerable<string> files = default, System.Collections.Generic.IEnumerable<string> archives = default, System.Collections.Generic.IDictionary<string,string> conf = default, Azure.ResourceManager.MachineLearning.Models.JobTier? queueJobTier = default, System.Collections.Generic.IDictionary<string,string> environmentVariables = default);
static member SparkJob : string * System.Collections.Generic.IDictionary<string, string> * System.Collections.Generic.IDictionary<string, string> * string * Nullable<Azure.ResourceManager.MachineLearning.Models.MachineLearningJobStatus> * string * System.Collections.Generic.IDictionary<string, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobService> * Azure.Core.ResourceIdentifier * Nullable<bool> * Azure.ResourceManager.MachineLearning.Models.MachineLearningIdentityConfiguration * Azure.Core.ResourceIdentifier * Azure.ResourceManager.MachineLearning.Models.NotificationSetting * Azure.ResourceManager.MachineLearning.Models.SparkResourceConfiguration * string * Azure.Core.ResourceIdentifier * Azure.ResourceManager.MachineLearning.Models.SparkJobEntry * Azure.Core.ResourceIdentifier * System.Collections.Generic.IDictionary<string, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobInput> * System.Collections.Generic.IDictionary<string, Azure.ResourceManager.MachineLearning.Models.MachineLearningJobOutput> * seq<string> * seq<string> * seq<string> * seq<string> * System.Collections.Generic.IDictionary<string, string> * Nullable<Azure.ResourceManager.MachineLearning.Models.JobTier> * System.Collections.Generic.IDictionary<string, string> -> Azure.ResourceManager.MachineLearning.Models.SparkJob
Public Shared Function SparkJob (Optional description As String = Nothing, Optional tags As IDictionary(Of String, String) = Nothing, Optional properties As IDictionary(Of String, String) = Nothing, Optional displayName As String = Nothing, Optional status As Nullable(Of MachineLearningJobStatus) = Nothing, Optional experimentName As String = Nothing, Optional services As IDictionary(Of String, MachineLearningJobService) = Nothing, Optional computeId As ResourceIdentifier = Nothing, Optional isArchived As Nullable(Of Boolean) = Nothing, Optional identity As MachineLearningIdentityConfiguration = Nothing, Optional componentId As ResourceIdentifier = Nothing, Optional notificationSetting As NotificationSetting = Nothing, Optional resources As SparkResourceConfiguration = Nothing, Optional args As String = Nothing, Optional codeId As ResourceIdentifier = Nothing, Optional entry As SparkJobEntry = Nothing, Optional environmentId As ResourceIdentifier = Nothing, Optional inputs As IDictionary(Of String, MachineLearningJobInput) = Nothing, Optional outputs As IDictionary(Of String, MachineLearningJobOutput) = Nothing, Optional pyFiles As IEnumerable(Of String) = Nothing, Optional jars As IEnumerable(Of String) = Nothing, Optional files As IEnumerable(Of String) = Nothing, Optional archives As IEnumerable(Of String) = Nothing, Optional conf As IDictionary(Of String, String) = Nothing, Optional queueJobTier As Nullable(Of JobTier) = Nothing, Optional environmentVariables As IDictionary(Of String, String) = Nothing) As SparkJob
Parameters
- description
- String
The asset description text.
- tags
- IDictionary<String,String>
Tag dictionary. Tags can be added, removed, and updated.
- properties
- IDictionary<String,String>
The asset property dictionary.
- displayName
- String
Display name of job.
- status
- Nullable<MachineLearningJobStatus>
Status of the job.
- experimentName
- String
The name of the experiment the job belongs to. If not set, the job is placed in the "Default" experiment.
- services
- IDictionary<String,MachineLearningJobService>
Dictionary of job endpoints (JobEndpoints). For local jobs, a job endpoint has an endpoint value of FileStreamObject.
- computeId
- ResourceIdentifier
ARM resource ID of the compute resource.
- isArchived
- Nullable<Boolean>
Is the asset archived?
- identity
- MachineLearningIdentityConfiguration
Identity configuration. If set, this should be one of AmlToken, ManagedIdentity, UserIdentity, or null; defaults to AmlToken if null. Note that MachineLearningIdentityConfiguration is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include AmlToken, MachineLearningManagedIdentity, and MachineLearningUserIdentity.
- componentId
- ResourceIdentifier
ARM resource ID of the component resource.
- notificationSetting
- NotificationSetting
Notification setting for the job.
- resources
- SparkResourceConfiguration
Compute resource configuration for the job.
- args
- String
Arguments for the job.
- codeId
- ResourceIdentifier
[Required] ARM resource ID of the code asset.
- entry
- SparkJobEntry
[Required] The entry to execute on startup of the job. Note that SparkJobEntry is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include SparkJobPythonEntry and SparkJobScalaEntry.
- environmentId
- ResourceIdentifier
The ARM resource ID of the Environment specification for the job.
- inputs
- IDictionary<String,MachineLearningJobInput>
Mapping of input data bindings used in the job. Note that MachineLearningJobInput is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include MachineLearningCustomModelJobInput, MachineLearningLiteralJobInput, MachineLearningFlowModelJobInput, MachineLearningTableJobInput, MachineLearningTritonModelJobInput, MachineLearningUriFileJobInput, and MachineLearningUriFolderJobInput.
- outputs
- IDictionary<String,MachineLearningJobOutput>
Mapping of output data bindings used in the job. Note that MachineLearningJobOutput is the base class. Depending on the scenario, a derived class of the base class might need to be assigned here, or this property might need to be cast to one of the possible derived classes. The available derived classes include MachineLearningCustomModelJobOutput, MachineLearningFlowModelJobOutput, MachineLearningTableJobOutput, MachineLearningTritonModelJobOutput, MachineLearningUriFileJobOutput, and MachineLearningUriFolderJobOutput.
- pyFiles
- IEnumerable<String>
Python files used in the job.
- jars
- IEnumerable<String>
Jar files used in the job.
- files
- IEnumerable<String>
Files used in the job.
- archives
- IEnumerable<String>
Archive files used in the job.
- conf
- IDictionary<String,String>
Spark configuration properties.
- queueJobTier
- Nullable<JobTier>
The job tier in the queue settings for the job.
- environmentVariables
- IDictionary<String,String>
Environment variables included in the job.
Returns
A new SparkJob instance for mocking.
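Examples
For comparison, a similar sketch using this overload, which takes codeId and environmentId as ResourceIdentifier values and flattens the queue settings into queueJobTier. The SparkJobScalaEntry constructor (a fully qualified class name) and the JobTier.Standard value are assumptions; all IDs and names are placeholders.

```csharp
using System;
using System.Collections.Generic;
using Azure.Core;
using Azure.ResourceManager.MachineLearning.Models;

// Placeholder ARM ID; this overload takes codeId as a ResourceIdentifier rather than a string.
SparkJob job = ArmMachineLearningModelFactory.SparkJob(
    displayName: "spark-job-mock",
    status: MachineLearningJobStatus.Running,
    codeId: new ResourceIdentifier(
        "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg" +
        "/providers/Microsoft.MachineLearningServices/workspaces/ws/codes/my-code/versions/1"),
    entry: new SparkJobScalaEntry("com.contoso.FeatureJob"),   // derived SparkJobEntry type (assumed constructor)
    queueJobTier: JobTier.Standard,                            // flattened form of QueueSettings.JobTier
    environmentVariables: new Dictionary<string, string> { ["SPARK_LOG_LEVEL"] = "INFO" });

Console.WriteLine(job.Status);
```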
Applies to
Azure SDK for .NET