
az hdinsight-on-aks cluster

Note

This reference is part of the hdinsightonaks extension for the Azure CLI (version 2.57.0 or higher). The extension will automatically install the first time you run an az hdinsight-on-aks cluster command. Learn more about extensions.
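The extension can also be managed explicitly before first use; a minimal sketch using the standard `az extension` commands:

```shell
# Install the hdinsightonaks extension up front instead of waiting
# for the automatic install on first use.
az extension add --name hdinsightonaks

# Upgrade an existing install to the latest published version.
az extension update --name hdinsightonaks
```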

This command group is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Cluster operations.

Commands

Name Description Type Status
az hdinsight-on-aks cluster create

Create a cluster.

Extension Preview
az hdinsight-on-aks cluster delete

Delete a cluster.

Extension Preview
az hdinsight-on-aks cluster instance-view

Get the status of cluster instances.

Extension Preview
az hdinsight-on-aks cluster instance-view list

List the instance views of a cluster.

Extension Preview
az hdinsight-on-aks cluster instance-view show

Get the status of a cluster instance.

Extension Preview
az hdinsight-on-aks cluster job

Cluster job operations.

Extension Preview
az hdinsight-on-aks cluster job list

List jobs of an HDInsight on AKS cluster.

Extension Preview
az hdinsight-on-aks cluster job run

Operations on jobs of an HDInsight on AKS cluster.

Extension Preview
az hdinsight-on-aks cluster library

Manage the library of the cluster.

Extension Preview
az hdinsight-on-aks cluster library list

List all libraries of an HDInsight on AKS cluster.

Extension Preview
az hdinsight-on-aks cluster library manage

Library management operations on an HDInsight on AKS cluster.

Extension Preview
az hdinsight-on-aks cluster list

List the HDInsight clusters in a cluster pool.

Extension Preview
az hdinsight-on-aks cluster list-service-config

List the config dump of all services running in the cluster.

Extension Preview
az hdinsight-on-aks cluster node-profile

Manage compute node profile.

Extension Preview
az hdinsight-on-aks cluster node-profile create

Create a node profile with SKU and worker count.

Extension Preview
az hdinsight-on-aks cluster resize

Resize an existing Cluster.

Extension Preview
az hdinsight-on-aks cluster show

Get an HDInsight cluster.

Extension Preview
az hdinsight-on-aks cluster update

Update a cluster.

Extension Preview
az hdinsight-on-aks cluster upgrade

Upgrade a cluster.

Extension Preview
az hdinsight-on-aks cluster upgrade history

List the upgrade history of a cluster.

Extension Preview
az hdinsight-on-aks cluster upgrade list

List the available upgrades for a cluster.

Extension Preview
az hdinsight-on-aks cluster upgrade rollback

Manually roll back an upgrade for a cluster.

Extension Preview
az hdinsight-on-aks cluster upgrade run

Upgrade a cluster.

Extension Preview
az hdinsight-on-aks cluster wait

Place the CLI in a waiting state until a condition is met.

Extension Preview
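A long-running create can be paired with the wait command so a script blocks only where needed. A sketch, assuming the standard condition flags (such as --created) of generated wait commands; all names are placeholders:

```shell
# Kick off creation without blocking the shell.
az hdinsight-on-aks cluster create -n myCluster \
    --cluster-pool-name myPool -g myResourceGroup \
    --cluster-type trino --no-wait   # other required arguments omitted

# Block until the cluster reaches a provisioned state.
az hdinsight-on-aks cluster wait -n myCluster \
    --cluster-pool-name myPool -g myResourceGroup --created
```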

az hdinsight-on-aks cluster create

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Create a cluster.

az hdinsight-on-aks cluster create --cluster-name
                                   --cluster-pool-name
                                   --resource-group
                                   [--application-log-std-error-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--application-log-std-out-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--assigned-identity-client-id]
                                   [--assigned-identity-id]
                                   [--assigned-identity-object-id]
                                   [--authorization-group-id]
                                   [--authorization-user-id]
                                   [--autoscale-profile-graceful-decommission-timeout]
                                   [--autoscale-profile-type {LoadBased, ScheduleBased}]
                                   [--availability-zones]
                                   [--cluster-type]
                                   [--cluster-version]
                                   [--cooldown-period]
                                   [--coord-debug-port]
                                   [--coord-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--coordinator-debug-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--coordinator-high-availability-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--db-connection-authentication-mode {IdentityAuth, SqlAuth}]
                                   [--deployment-mode {Application, Session}]
                                   [--enable-autoscale {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-la-metrics {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-log-analytics {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-prometheu {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-worker-debug {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--flink-db-auth-mode {IdentityAuth, SqlAuth}]
                                   [--flink-hive-catalog-db-connection-password-secret]
                                   [--flink-hive-catalog-db-connection-url]
                                   [--flink-hive-catalog-db-connection-user-name]
                                   [--flink-storage-key]
                                   [--flink-storage-uri]
                                   [--history-server-cpu]
                                   [--history-server-memory]
                                   [--identity-list]
                                   [--internal-ingress {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--job-manager-cpu]
                                   [--job-manager-memory]
                                   [--job-spec]
                                   [--kafka-profile]
                                   [--key-vault-id]
                                   [--llap-profile]
                                   [--loadbased-config-max-nodes]
                                   [--loadbased-config-min-nodes]
                                   [--loadbased-config-poll-interval]
                                   [--loadbased-config-scaling-rules]
                                   [--location]
                                   [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--nodes]
                                   [--num-replicas]
                                   [--oss-version]
                                   [--ranger-plugin-profile]
                                   [--ranger-profile]
                                   [--schedule-based-config-default-count]
                                   [--schedule-based-config-schedule]
                                   [--schedule-based-config-time-zone]
                                   [--script-action-profiles]
                                   [--secret-reference]
                                   [--service-configs]
                                   [--spark-hive-catalog-db-name]
                                   [--spark-hive-catalog-db-password-secret]
                                   [--spark-hive-catalog-db-server-name]
                                   [--spark-hive-catalog-db-user-name]
                                   [--spark-hive-catalog-key-vault-id]
                                   [--spark-hive-catalog-thrift-url]
                                   [--spark-storage-url]
                                   [--ssh-profile-count]
                                   [--stub-profile]
                                   [--tags]
                                   [--task-manager-cpu]
                                   [--task-manager-memory]
                                   [--trino-hive-catalog]
                                   [--trino-plugins-spec]
                                   [--trino-profile-user-plugins-telemetry-spec]
                                   [--user-plugins-spec]
                                   [--vm-size]
                                   [--worker-debug-port]
                                   [--worker-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]

Examples

Create a simple Trino cluster.

az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type trino --cluster-version {1.2.0} --oss-version {0.440.0} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000"

Create a simple Flink cluster.

az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type flink --flink-storage-uri {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {1.17.0} --nodes '[{"count":5,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --job-manager-cpu {1} --job-manager-memory {2000} --task-manager-cpu {6} --task-manager-memory {49016}

Create a simple Spark cluster.

az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type spark --spark-storage-url {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {3.4.1} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000"

Create a simple Kafka cluster.

az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type kafka --cluster-version {1.2.0} --oss-version {3.6.0} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --kafka-profile '{"disk-storage":{"data-disk-size":8,"data-disk-type":"Standard_SSD_LRS"}}'

Create a Spark cluster with a custom Hive metastore.

az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type spark --spark-storage-url {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {3.4.1} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --secret-reference '[{reference-name:sqlpassword,secret-name:sqlpassword,type:Secret}]' --key-vault-id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.KeyVault/vaults/CLIKV --spark-hive-kv-id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.KeyVault/vaults/CLIKV --spark-db-auth-mode SqlAuth --spark-hive-db-name {sparkhms} --spark-hive-db-secret {sqlpassword} --spark-hive-db-server {yourserver.database.windows.net} --spark-hive-db-user {username}

Create a Flink cluster with availability zones.

az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type flink --flink-storage-uri {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {1.17.0} --nodes '[{"count":5,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --job-manager-cpu {1} --job-manager-memory {2000} --task-manager-cpu {6} --task-manager-memory {49016} --availability-zones [1,2]

Required Parameters

--cluster-name --name -n

The name of the HDInsight cluster.

--cluster-pool-name

The name of the cluster pool.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.
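Defaults can be configured once so that -g (and -l) may be omitted from later commands; values below are placeholders:

```shell
# Set default resource group and location for subsequent az commands.
az configure --defaults group=myResourceGroup location=westus3

# -g / -l can now be omitted.
az hdinsight-on-aks cluster show -n myCluster --cluster-pool-name myPool
```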

Optional Parameters

--application-log-std-error-enabled --enable-log-std-error

True if application standard error is enabled, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--application-log-std-out-enabled --enable-log-std-out

True if application standard out is enabled, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--assigned-identity-client-id --msi-client-id

ClientId of the MSI.

--assigned-identity-id --msi-id

ResourceId of the MSI.

--assigned-identity-object-id --msi-object-id

ObjectId of the MSI.

--authorization-group-id

AAD group IDs authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--authorization-user-id

AAD user IDs authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--autoscale-profile-graceful-decommission-timeout --decommission-time

The graceful decommission timeout: the maximum time, in seconds, to wait for running containers and applications to complete before a DECOMMISSIONING node transitions to DECOMMISSIONED. The default is 3600 seconds; a negative value (such as -1) is treated as an infinite timeout.

--autoscale-profile-type

Specifies which type of autoscale to implement: schedule-based or load-based.

Accepted values: LoadBased, ScheduleBased
--availability-zones

The list of Availability zones to use for AKS VMSS nodes. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--cluster-type

The type of cluster.

--cluster-version

Version with three or four parts.

--cooldown-period --loadbased-config-cooldown-period

The cooldown period, in seconds: the time that must elapse between one scaling activity and the start of the next, regardless of the rule that triggers it. The default is 300 seconds.

--coord-debug-port --coordinator-debug-port

The coordinator debug port. Default: 8008.

--coord-debug-suspend --coordinator-debug-suspend

Whether to suspend debug on the coordinator. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--coordinator-debug-enabled --enable-coord-debug

Whether to enable debug on the coordinator. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--coordinator-high-availability-enabled --enable-coord-ha

Whether to enable coordinator high availability; uses multiple coordinator replicas with automatic failover, one per head node. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--db-connection-authentication-mode --spark-db-auth-mode

The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.

Accepted values: IdentityAuth, SqlAuth
--deployment-mode

A string property that indicates the deployment mode of a Flink cluster. Allowed values: Application, Session. Default: Session.

Accepted values: Application, Session
--enable-autoscale

Indicates whether autoscale is enabled on an HDInsight on AKS cluster.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--enable-la-metrics --log-analytic-profile-metrics-enabled

True if metrics are enabled, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--enable-log-analytics

True if log analytics is enabled for the cluster, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--enable-prometheu

Whether to enable Prometheus for the cluster.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
Default value: False
--enable-worker-debug

Whether to enable debug on Trino cluster workers. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--flink-db-auth-mode --metastore-db-connection-authentication-mode

The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.

Accepted values: IdentityAuth, SqlAuth
--flink-hive-catalog-db-connection-password-secret --flink-hive-db-secret

Secret reference name from secretsProfile.secrets containing password for database connection.

--flink-hive-catalog-db-connection-url --flink-hive-db-url

Connection string for hive metastore database.

--flink-hive-catalog-db-connection-user-name --flink-hive-db-user

User name for database connection.

--flink-storage-key

Storage key is only required for wasb(s) storage.

--flink-storage-uri

Storage account URI which is used for savepoint and checkpoint state.

--history-server-cpu

History server CPU count.

--history-server-memory

History server memory size.

--identity-list

The list of managed identities. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--internal-ingress --internal-ingress-enabled

Whether to create the cluster using a private IP instead of a public IP. This property must be set at create time.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--job-manager-cpu

Job manager CPU count.

--job-manager-memory

Job manager memory size.

--job-spec

Job specifications for Flink clusters in application deployment mode. The specification is immutable even if job properties change via the RunJob API; use the ListJob API to get the latest job information. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--kafka-profile

Kafka cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--key-vault-id

Name of the user Key Vault where all the cluster-specific user secrets are stored.

--llap-profile

LLAP cluster profile. Support json-file and yaml-file.

--loadbased-config-max-nodes --loadbased-max-nodes

The maximum number of nodes for load-based scaling; load-based scaling scales up and down between the minimum and maximum node counts.

--loadbased-config-min-nodes --loadbased-min-nodes

The minimum number of nodes for load-based scaling; load-based scaling scales up and down between the minimum and maximum node counts.

--loadbased-config-poll-interval --loadbased-interval

The poll interval, in seconds: the period after which scaling metrics are polled to trigger a scaling operation.

--loadbased-config-scaling-rules --loadbased-rules

The scaling rules. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--location -l

The geo-location where the resource lives. When not specified, the location of the resource group is used.

--no-wait

Do not wait for the long-running operation to finish.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--nodes

The nodes definitions. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--num-replicas

The number of task managers.

--oss-version

Version with three parts.

--ranger-plugin-profile

Cluster Ranger plugin profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--ranger-profile

The ranger cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--schedule-based-config-default-count --schedule-default-count

Sets the default node count of the current schedule configuration. The default node count specifies the number of nodes that apply when a specified scaling operation (scale up/scale down) is executed.

--schedule-based-config-schedule --schedule-schedules

Specifies the schedules where schedule-based autoscale is enabled; multiple rules can be set within the schedule across days and times (start/end). Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--schedule-based-config-time-zone --schedule-time-zone

The time zone in which the schedule is set for schedule-based autoscale configuration.

--script-action-profiles

The script action profile list. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--secret-reference

Properties of Key Vault secret. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--service-configs --service-configs-profiles

The service configs profiles. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--spark-hive-catalog-db-name --spark-hive-db-name

The database name.

--spark-hive-catalog-db-password-secret --spark-hive-db-secret

The secret name which contains the database user password.

--spark-hive-catalog-db-server-name --spark-hive-db-server

The database server host.

--spark-hive-catalog-db-user-name --spark-hive-db-user

The database user name.

--spark-hive-catalog-key-vault-id --spark-hive-kv-id

The key vault resource id.

--spark-hive-catalog-thrift-url --spark-hive-thrift-url

The thrift url.

--spark-storage-url

The default storage URL.

--ssh-profile-count

Number of SSH pods per cluster.

--stub-profile

Stub cluster profile. Support json-file and yaml-file.

--tags

Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--task-manager-cpu

Task manager CPU count.

--task-manager-memory

The task manager memory size.

--trino-hive-catalog

Trino cluster hive catalog options. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--trino-plugins-spec --trino-profile-user-plugins-plugin-spec

Trino user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--trino-profile-user-plugins-telemetry-spec --trino-telemetry-spec

Trino user telemetry spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--user-plugins-spec

Spark user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--vm-size

The virtual machine SKU.

--worker-debug-port

The debug port. Default: 8008.

--worker-debug-suspend

Whether to suspend debug on Trino cluster workers. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hdinsight-on-aks cluster delete

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Delete a cluster.

az hdinsight-on-aks cluster delete [--cluster-name]
                                   [--cluster-pool-name]
                                   [--ids]
                                   [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--resource-group]
                                   [--subscription]
                                   [--yes]

Examples

Delete a cluster.

az hdinsight-on-aks cluster delete -n {clusterName} --cluster-pool-name {poolName} -g {RG}

Optional Parameters

--cluster-name --name -n

The name of the HDInsight cluster.

--cluster-pool-name

The name of the cluster pool.

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--no-wait

Do not wait for the long-running operation to finish.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--yes -y

Do not prompt for confirmation.

Default value: False
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hdinsight-on-aks cluster list

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

List the HDInsight clusters in a cluster pool.

az hdinsight-on-aks cluster list --cluster-pool-name
                                 --resource-group
                                 [--max-items]
                                 [--next-token]

Examples

List all clusters in a cluster pool.

az hdinsight-on-aks cluster list --cluster-pool-name {poolName} -g {RG}

Required Parameters

--cluster-pool-name

The name of the cluster pool.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--max-items

Total number of items to return in the command's output. If the total number of items available is more than the value specified, a token is provided in the command's output. To resume pagination, provide the token value in --next-token argument of a subsequent command.

--next-token

Token to specify where to start paginating. This is the token value from a previously truncated response.
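A sketch of paging through a long listing with these two parameters; the token value is copied from the previous response, and all other names are placeholders:

```shell
# First page: return at most 10 clusters; when more exist, the output
# includes a continuation token.
az hdinsight-on-aks cluster list \
    --cluster-pool-name myPool -g myResourceGroup --max-items 10

# Next page: resume from the token emitted by the previous call.
az hdinsight-on-aks cluster list \
    --cluster-pool-name myPool -g myResourceGroup \
    --max-items 10 --next-token "<token-from-previous-output>"
```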

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hdinsight-on-aks cluster list-service-config

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

List the config dump of all services running in the cluster.

az hdinsight-on-aks cluster list-service-config --cluster-name
                                                --cluster-pool-name
                                                --resource-group
                                                [--max-items]
                                                [--next-token]

Examples

List the config dump of all services running in the cluster.

az hdinsight-on-aks cluster list-service-config --cluster-name {clusterName} --cluster-pool-name {poolName} -g {RG}

Required Parameters

--cluster-name

The name of the HDInsight cluster.

--cluster-pool-name

The name of the cluster pool.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--max-items

Total number of items to return in the command's output. If the total number of items available is more than the value specified, a token is provided in the command's output. To resume pagination, provide the token value in --next-token argument of a subsequent command.

--next-token

Token to specify where to start paginating. This is the token value from a previously truncated response.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hdinsight-on-aks cluster resize

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Resize an existing Cluster.

az hdinsight-on-aks cluster resize [--cluster-name]
                                   [--cluster-pool-name]
                                   [--ids]
                                   [--location]
                                   [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--resource-group]
                                   [--subscription]
                                   [--tags]
                                   [--target-worker-node-count]

Examples

Resize a cluster.

az hdinsight-on-aks cluster resize --cluster-name {clusterName} --cluster-pool-name {poolName} -g {RG} -l {westus3} --target-worker-node-count {6}

Optional Parameters

--cluster-name

The name of the HDInsight cluster.

--cluster-pool-name

The name of the cluster pool.

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--location -l

The geo-location where the resource lives. When not specified, the location of the resource group will be used.

--no-wait

Do not wait for the long-running operation to finish.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--tags

Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--target-worker-node-count --worker-node-count

Target node count of worker node.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hdinsight-on-aks cluster show

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Get an HDInsight cluster.

az hdinsight-on-aks cluster show [--cluster-name]
                                 [--cluster-pool-name]
                                 [--ids]
                                 [--resource-group]
                                 [--subscription]

Examples

Get a cluster with cluster name.

az hdinsight-on-aks cluster show -n {clusterName} --cluster-pool-name {poolName} -g {RG}
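
Because the default output is JSON, a JMESPath --query can trim the response to a single field. A sketch with placeholder resource names; the property path shown is an assumption and may differ from the actual response shape for your cluster type:

```shell
# Show only the provisioning state of a cluster
# (myCluster, myPool, and myRG are placeholder names;
#  "properties.status" is an assumed property path).
az hdinsight-on-aks cluster show \
    -n myCluster \
    --cluster-pool-name myPool \
    -g myRG \
    --query "properties.status" \
    -o tsv
```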

Optional Parameters

--cluster-name --name -n

The name of the HDInsight cluster.

--cluster-pool-name

The name of the cluster pool.

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hdinsight-on-aks cluster update

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Update a cluster.

az hdinsight-on-aks cluster update [--add]
                                   [--application-log-std-error-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--application-log-std-out-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--assigned-identity-client-id]
                                   [--assigned-identity-id]
                                   [--assigned-identity-object-id]
                                   [--authorization-group-id]
                                   [--authorization-user-id]
                                   [--autoscale-profile-graceful-decommission-timeout]
                                   [--autoscale-profile-type {LoadBased, ScheduleBased}]
                                   [--availability-zones]
                                   [--cluster-name]
                                   [--cluster-pool-name]
                                   [--cluster-version]
                                   [--cooldown-period]
                                   [--coord-debug-port]
                                   [--coord-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--coordinator-debug-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--coordinator-high-availability-enabled {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--db-connection-authentication-mode {IdentityAuth, SqlAuth}]
                                   [--deployment-mode {Application, Session}]
                                   [--enable-autoscale {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-la-metrics {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-log-analytics {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-prometheu {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--enable-worker-debug {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--flink-db-auth-mode {IdentityAuth, SqlAuth}]
                                   [--flink-hive-catalog-db-connection-password-secret]
                                   [--flink-hive-catalog-db-connection-url]
                                   [--flink-hive-catalog-db-connection-user-name]
                                   [--flink-storage-key]
                                   [--flink-storage-uri]
                                   [--force-string {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--history-server-cpu]
                                   [--history-server-memory]
                                   [--identity-list]
                                   [--ids]
                                   [--job-manager-cpu]
                                   [--job-manager-memory]
                                   [--job-spec]
                                   [--kafka-profile]
                                   [--key-vault-id]
                                   [--llap-profile]
                                   [--loadbased-config-max-nodes]
                                   [--loadbased-config-min-nodes]
                                   [--loadbased-config-poll-interval]
                                   [--loadbased-config-scaling-rules]
                                   [--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
                                   [--nodes]
                                   [--num-replicas]
                                   [--oss-version]
                                   [--ranger-plugin-profile]
                                   [--ranger-profile]
                                   [--remove]
                                   [--resource-group]
                                   [--schedule-based-config-default-count]
                                   [--schedule-based-config-schedule]
                                   [--schedule-based-config-time-zone]
                                   [--script-action-profiles]
                                   [--secret-reference]
                                   [--service-configs]
                                   [--set]
                                   [--spark-hive-catalog-db-name]
                                   [--spark-hive-catalog-db-password-secret]
                                   [--spark-hive-catalog-db-server-name]
                                   [--spark-hive-catalog-db-user-name]
                                   [--spark-hive-catalog-key-vault-id]
                                   [--spark-hive-catalog-thrift-url]
                                   [--spark-storage-url]
                                   [--ssh-profile-count]
                                   [--stub-profile]
                                   [--subscription]
                                   [--tags]
                                   [--task-manager-cpu]
                                   [--task-manager-memory]
                                   [--trino-hive-catalog]
                                   [--trino-plugins-spec]
                                   [--trino-profile-user-plugins-telemetry-spec]
                                   [--user-plugins-spec]
                                   [--vm-size]
                                   [--worker-debug-port]
                                   [--worker-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]

Examples

Update a cluster service-config.

az hdinsight-on-aks cluster update -n {clusterName} --cluster-pool-name {poolName} -g {RG} --service-configs {"[{service-name:yarn-service,configs:[{component:hadoop-config-client,files:[{file-name:yarn-site.xml,values:{yarn.nodemanager.resource.memory-mb:33333}}]}]}]"}
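
The same shorthand syntax accepts other profile updates. For example, schedule-based autoscale could be enabled roughly as follows; this is a sketch with placeholder resource names, and the schedule shorthand keys are assumptions — verify the exact syntax with the "??" interactive help:

```shell
# Enable schedule-based autoscale (myCluster, myPool, and myRG are
# placeholders; the schedule values and shorthand keys are illustrative).
az hdinsight-on-aks cluster update \
    -n myCluster \
    --cluster-pool-name myPool \
    -g myRG \
    --enable-autoscale true \
    --autoscale-profile-type ScheduleBased \
    --schedule-based-config-time-zone "UTC" \
    --schedule-based-config-default-count 3 \
    --schedule-based-config-schedule "[{days:[Monday],start-time:09:00,end-time:18:00,count:6}]"
```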

Optional Parameters

--add

Add an object to a list of objects by specifying a path and key value pairs. Example: --add property.listProperty <key=value, string or JSON string>.

--application-log-std-error-enabled --enable-log-std-error

True if application standard error is enabled, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--application-log-std-out-enabled --enable-log-std-out

True if application standard out is enabled, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--assigned-identity-client-id --msi-client-id

ClientId of the MSI.

--assigned-identity-id --msi-id

ResourceId of the MSI.

--assigned-identity-object-id --msi-object-id

ObjectId of the MSI.

--authorization-group-id

AAD group Ids authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--authorization-user-id

AAD user Ids authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--autoscale-profile-graceful-decommission-timeout --decommission-time

The graceful decommission timeout: the maximum time to wait for running containers and applications to complete before a DECOMMISSIONING node is transitioned to DECOMMISSIONED and forced shutdown takes place. The default is 3600 seconds; a negative value (such as -1) is treated as an infinite timeout.

--autoscale-profile-type

Specifies which type of autoscale to implement: schedule-based or load-based.

Accepted values: LoadBased, ScheduleBased
--availability-zones

The list of Availability zones to use for AKS VMSS nodes. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--cluster-name --name -n

The name of the HDInsight cluster.

--cluster-pool-name

The name of the cluster pool.

--cluster-version

Version with 3 or 4 parts.

--cooldown-period --loadbased-config-cooldown-period

The cooldown period: the time, in seconds, that must elapse between the start of one scaling activity triggered by a rule and the start of the next, regardless of which rule triggers it. The default is 300 seconds.

--coord-debug-port --coordinator-debug-port

The debug port. Default: 8008.

--coord-debug-suspend --coordinator-debug-suspend

The flag indicating whether to suspend debugging. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--coordinator-debug-enabled --enable-coord-debug

The flag indicating whether to enable coordinator debugging. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--coordinator-high-availability-enabled --enable-coord-ha

The flag indicating whether to enable coordinator high availability: multiple coordinator replicas are used with automatic failover, one per head node. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--db-connection-authentication-mode --spark-db-auth-mode

The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.

Accepted values: IdentityAuth, SqlAuth
--deployment-mode

A string property indicating the deployment mode of a Flink cluster. It can have one of the following enum values: Application, Session. The default value is Session.

Accepted values: Application, Session
--enable-autoscale

Indicates whether autoscale is enabled on the HDInsight on AKS cluster.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--enable-la-metrics --log-analytic-profile-metrics-enabled

True if metrics are enabled, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--enable-log-analytics

True if log analytics is enabled for the cluster, otherwise false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--enable-prometheu

Whether to enable Prometheus for the cluster.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--enable-worker-debug

The flag indicating whether to enable debugging for the Trino cluster. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--flink-db-auth-mode --metastore-db-connection-authentication-mode

The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.

Accepted values: IdentityAuth, SqlAuth
--flink-hive-catalog-db-connection-password-secret --flink-hive-db-secret

Secret reference name from secretsProfile.secrets containing password for database connection.

--flink-hive-catalog-db-connection-url --flink-hive-db-url

Connection string for hive metastore database.

--flink-hive-catalog-db-connection-user-name --flink-hive-db-user

User name for database connection.

--flink-storage-key

Storage key is only required for wasb(s) storage.

--flink-storage-uri

Storage account URI used for savepoint and checkpoint state.

--force-string

When using 'set' or 'add', preserve string literals instead of attempting to convert to JSON.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--history-server-cpu

History server CPU count.

--history-server-memory

History server memory size.

--identity-list

The list of managed identity. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--job-manager-cpu

Job manager CPU count.

--job-manager-memory

Job manager memory size.

--job-spec

Job specifications for Flink clusters in application deployment mode. The specification is immutable even if job properties are changed by calling the RunJob API; use the ListJob API to get the latest job information. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--kafka-profile

Kafka cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--key-vault-id

Name of the user Key Vault where all the cluster specific user secrets are stored.

--llap-profile

LLAP cluster profile. Support json-file and yaml-file.

--loadbased-config-max-nodes --loadbased-max-nodes

The maximum number of nodes for load-based scaling; load-based scaling scales up and down between the minimum and maximum node counts.

--loadbased-config-min-nodes --loadbased-min-nodes

The minimum number of nodes for load-based scaling; load-based scaling scales up and down between the minimum and maximum node counts.

--loadbased-config-poll-interval --loadbased-interval

The poll interval: the time period, in seconds, after which scaling metrics are polled to trigger a scaling operation.

--loadbased-config-scaling-rules --loadbased-rules

The scaling rules. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--no-wait

Do not wait for the long-running operation to finish.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
--nodes

The nodes definitions. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--num-replicas

The number of task managers.

--oss-version

Version with three parts.

--ranger-plugin-profile

Cluster Ranger plugin profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--ranger-profile

The ranger cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--remove

Remove a property or an element from a list. Example: --remove property.list <indexToRemove> OR --remove propertyToRemove.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--schedule-based-config-default-count --schedule-default-count

The default node count of the current schedule configuration: the number of nodes that apply by default when a specified scaling operation (scale up or scale down) is executed.

--schedule-based-config-schedule --schedule-schedules

The schedules for which schedule-based autoscale is enabled; multiple rules can be set within the schedule, across days and times (start/end). Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--schedule-based-config-time-zone --schedule-time-zone

The time zone in which the schedule is set for schedule-based autoscale configuration.

--script-action-profiles

The script action profile list. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--secret-reference

Properties of Key Vault secret. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--service-configs --service-configs-profiles

The service configs profiles. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--set

Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>.

--spark-hive-catalog-db-name --spark-hive-db-name

The database name.

--spark-hive-catalog-db-password-secret --spark-hive-db-secret

The secret name which contains the database user password.

--spark-hive-catalog-db-server-name --spark-hive-db-server

The database server host.

--spark-hive-catalog-db-user-name --spark-hive-db-user

The database user name.

--spark-hive-catalog-key-vault-id --spark-hive-kv-id

The key vault resource id.

--spark-hive-catalog-thrift-url --spark-hive-thrift-url

The thrift url.

--spark-storage-url

The default storage URL.

--ssh-profile-count

Number of SSH pods per cluster.

--stub-profile

Stub cluster profile. Support json-file and yaml-file.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--tags

Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--task-manager-cpu

Task manager CPU count.

--task-manager-memory

The task manager memory size.

--trino-hive-catalog

Hive catalog options. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--trino-plugins-spec --trino-profile-user-plugins-plugin-spec

Trino user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--trino-profile-user-plugins-telemetry-spec --trino-telemetry-spec

Trino user telemetry spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--user-plugins-spec

Spark user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.

--vm-size

The virtual machine SKU.

--worker-debug-port

The debug port. Default: 8008.

--worker-debug-suspend

The flag indicating whether to suspend debugging for the Trino cluster. Default: false.

Accepted values: 0, 1, f, false, n, no, t, true, y, yes
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az hdinsight-on-aks cluster wait

Preview

Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

Place the CLI in a waiting state until a condition is met.

az hdinsight-on-aks cluster wait [--cluster-name]
                                 [--cluster-pool-name]
                                 [--created]
                                 [--custom]
                                 [--deleted]
                                 [--exists]
                                 [--ids]
                                 [--interval]
                                 [--resource-group]
                                 [--subscription]
                                 [--timeout]
                                 [--updated]
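
A typical use is to pair wait with a command started via --no-wait: start the long-running operation without blocking, then wait until the cluster reaches provisioningState 'Succeeded'. A sketch using placeholder resource names:

```shell
# Start a resize without blocking (myCluster, myPool, myRG are placeholders).
az hdinsight-on-aks cluster resize \
    --cluster-name myCluster \
    --cluster-pool-name myPool \
    -g myRG \
    --target-worker-node-count 6 \
    --no-wait

# Block until the update completes, polling every 30 seconds,
# giving up after 30 minutes.
az hdinsight-on-aks cluster wait \
    --cluster-name myCluster \
    --cluster-pool-name myPool \
    -g myRG \
    --updated \
    --interval 30 \
    --timeout 1800
```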

Optional Parameters

--cluster-name --name -n

The name of the HDInsight cluster.

--cluster-pool-name

The name of the cluster pool.

--created

Wait until created with 'provisioningState' at 'Succeeded'.

Default value: False
--custom

Wait until the condition satisfies a custom JMESPath query. E.g. provisioningState!='InProgress', instanceView.statuses[?code=='PowerState/running'].

--deleted

Wait until deleted.

Default value: False
--exists

Wait until the resource exists.

Default value: False
--ids

One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.

--interval

Polling interval in seconds.

Default value: 30
--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--timeout

Maximum wait in seconds.

Default value: 3600
--updated

Wait until updated with provisioningState at 'Succeeded'.

Default value: False
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.