
Submit-AzSynapseSparkJob

Submits a Synapse Analytics Spark job.

Syntax

Submit-AzSynapseSparkJob
      -WorkspaceName <String>
      -SparkPoolName <String>
      -Language <String>
      -Name <String>
      -MainDefinitionFile <String>
      [-MainClassName <String>]
      [-CommandLineArgument <String[]>]
      [-ReferenceFile <String[]>]
      -ExecutorCount <Int32>
      -ExecutorSize <String>
      [-Configuration <Hashtable>]
      [-DefaultProfile <IAzureContextContainer>]
      [-WhatIf]
      [-Confirm]
      [<CommonParameters>]
Submit-AzSynapseSparkJob
      -SparkPoolObject <PSSynapseSparkPool>
      -Language <String>
      -Name <String>
      -MainDefinitionFile <String>
      [-MainClassName <String>]
      [-CommandLineArgument <String[]>]
      [-ReferenceFile <String[]>]
      -ExecutorCount <Int32>
      -ExecutorSize <String>
      [-Configuration <Hashtable>]
      [-DefaultProfile <IAzureContextContainer>]
      [-WhatIf]
      [-Confirm]
      [<CommonParameters>]

Description

The Submit-AzSynapseSparkJob cmdlet submits a Synapse Analytics Spark job.

Examples

Example 1

Submit-AzSynapseSparkJob -WorkspaceName ContosoWorkspace -SparkPoolName ContosoSparkPool -Language Spark -Name WordCount_Java -MainDefinitionFile abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/java/wordcount/wordcount.jar -MainClassName WordCount -CommandLineArgument abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/java/wordcount/shakespeare.txt,abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/java/wordcount/result/ -ExecutorCount 2 -ExecutorSize Small

This command submits a Synapse Analytics Spark job.

Example 2

Submit-AzSynapseSparkJob -WorkspaceName ContosoWorkspace -SparkPoolName ContosoSparkPool -Language SparkDotNet -Name WordCount_Dotnet -MainDefinitionFile abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/dotnet/wordcount/wordcount.zip -MainExecutableFile WordCount -CommandLineArgument abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/dotnet/wordcount/shakespeare.txt,abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/dotnet/wordcount/result -ExecutorCount 2 -ExecutorSize Small

This command submits a Synapse Analytics Spark .NET job.

Example 3

Submit-AzSynapseSparkJob -WorkspaceName ContosoWorkspace -SparkPoolName ContosoSparkPool -Language PySpark -Name WordCount_Python -MainDefinitionFile abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/python/wordcount/wordcount.py -CommandLineArgument abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/python/wordcount/shakespeare.txt,abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/python/wordcount/result/ -ExecutorCount 2 -ExecutorSize Small

This command submits a Synapse Analytics PySpark job.
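As the second syntax block shows, the target pool can also be supplied as an object through the pipeline with -SparkPoolObject, so the workspace and pool names do not need to be repeated. A minimal sketch, assuming the workspace and pool from the examples above exist:

```powershell
# Retrieve the pool object once, then pipe it in; the cmdlet resolves the
# workspace and pool from the object.
$pool = Get-AzSynapseSparkPool -WorkspaceName ContosoWorkspace -Name ContosoSparkPool
$pool | Submit-AzSynapseSparkJob -Language PySpark -Name WordCount_Python `
    -MainDefinitionFile abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/python/wordcount/wordcount.py `
    -ExecutorCount 2 -ExecutorSize Small
```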

Parameters

-CommandLineArgument

Optional arguments to the job, for example "--iteration 10000 --timeout 20s".

Type:String[]
Position:Named
Default value:None
Required:False
Accept pipeline input:False
Accept wildcard characters:False
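Because the parameter type is String[], arguments can also be passed as an explicit PowerShell array instead of a single comma-separated string. A sketch, with hypothetical input and output paths:

```powershell
# Build the argument list as an array; each element becomes one argument.
# ($args is an automatic variable in PowerShell, so use a different name.)
$jobArgs = @(
    'abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/java/wordcount/shakespeare.txt',
    'abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/java/wordcount/result/'
)
Submit-AzSynapseSparkJob -WorkspaceName ContosoWorkspace -SparkPoolName ContosoSparkPool `
    -Language Spark -Name WordCount_Java `
    -MainDefinitionFile abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/java/wordcount/wordcount.jar `
    -MainClassName WordCount -CommandLineArgument $jobArgs `
    -ExecutorCount 2 -ExecutorSize Small
```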

-Configuration

Spark configuration properties.

Type:Hashtable
Position:Named
Default value:None
Required:False
Accept pipeline input:False
Accept wildcard characters:False
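Configuration properties are passed as a hashtable of name/value pairs. A sketch; the property names are standard Apache Spark settings chosen for illustration, not values the cmdlet requires:

```powershell
# Override Spark properties for this job only.
Submit-AzSynapseSparkJob -WorkspaceName ContosoWorkspace -SparkPoolName ContosoSparkPool `
    -Language Spark -Name WordCount_Java `
    -MainDefinitionFile abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/java/wordcount/wordcount.jar `
    -MainClassName WordCount -ExecutorCount 2 -ExecutorSize Small `
    -Configuration @{
        'spark.sql.shuffle.partitions' = '8'
        'spark.serializer'             = 'org.apache.spark.serializer.KryoSerializer'
    }
```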

-Confirm

Prompts you for confirmation before running the cmdlet.

Type:SwitchParameter
Aliases:cf
Position:Named
Default value:None
Required:False
Accept pipeline input:False
Accept wildcard characters:False

-DefaultProfile

The credentials, account, tenant, and subscription used for communication with Azure.

Type:IAzureContextContainer
Aliases:AzContext, AzureRmContext, AzureCredential
Position:Named
Default value:None
Required:False
Accept pipeline input:False
Accept wildcard characters:False

-ExecutorCount

Number of executors to be allocated in the specified Spark pool for the job.

Type:Int32
Position:Named
Default value:None
Required:True
Accept pipeline input:False
Accept wildcard characters:False

-ExecutorSize

Number of cores and amount of memory for the executors allocated in the specified Spark pool for the job.

Type:String
Accepted values:Small, Medium, Large, XLarge, XXLarge, XXXLarge
Position:Named
Default value:None
Required:True
Accept pipeline input:False
Accept wildcard characters:False

-Language

The language of the job to submit.

Type:String
Accepted values:Spark, Scala, PySpark, Python, SparkDotNet, CSharp
Position:Named
Default value:None
Required:True
Accept pipeline input:False
Accept wildcard characters:False

-MainClassName

The fully qualified identifier or the main class that is in the main definition file. Required for Spark and .NET Spark jobs. For example, "org.apache.spark.examples.SparkPi".

Type:String
Aliases:MainExecutableFile
Position:Named
Default value:None
Required:False
Accept pipeline input:False
Accept wildcard characters:False

-MainDefinitionFile

The main file used for the job. For example, "abfss://filesystem@account.dfs.core.windows.net/mySpark.jar".

Type:String
Position:Named
Default value:None
Required:True
Accept pipeline input:False
Accept wildcard characters:False

-Name

Name of the Spark job.

Type:String
Position:Named
Default value:None
Required:True
Accept pipeline input:False
Accept wildcard characters:False

-ReferenceFile

Additional files used for reference in the main definition file, as a comma-separated list of storage URIs. For example, "abfss://filesystem@account.dfs.core.windows.net/file1.txt,abfss://filesystem@account.dfs.core.windows.net/result/".

Type:String[]
Position:Named
Default value:None
Required:False
Accept pipeline input:False
Accept wildcard characters:False
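Like -CommandLineArgument, reference files take a string array. A sketch using a hypothetical helper module alongside the PySpark example above:

```powershell
# Ship an extra file that the main definition file imports.
Submit-AzSynapseSparkJob -WorkspaceName ContosoWorkspace -SparkPoolName ContosoSparkPool `
    -Language PySpark -Name WordCount_Python `
    -MainDefinitionFile abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/python/wordcount/wordcount.py `
    -ExecutorCount 2 -ExecutorSize Small `
    -ReferenceFile @(
        # helpers.py is a hypothetical module name for illustration.
        'abfss://ContosoFileSystem@ContosoGen2Storage.dfs.core.windows.net/samples/python/wordcount/helpers.py'
    )
```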

-SparkPoolName

Name of the Synapse Spark pool.

Type:String
Position:Named
Default value:None
Required:True
Accept pipeline input:False
Accept wildcard characters:False

-SparkPoolObject

Spark pool input object, usually passed through the pipeline.

Type:PSSynapseSparkPool
Position:Named
Default value:None
Required:True
Accept pipeline input:True
Accept wildcard characters:False

-WhatIf

Shows what would happen if the cmdlet runs. The cmdlet is not run.

Type:SwitchParameter
Aliases:wi
Position:Named
Default value:None
Required:False
Accept pipeline input:False
Accept wildcard characters:False

-WorkspaceName

Name of the Synapse workspace.

Type:String
Position:Named
Default value:None
Required:True
Accept pipeline input:False
Accept wildcard characters:False

Inputs

PSSynapseSparkPool

Outputs

PSSynapseSparkJob