az hdinsight-on-aks cluster
Note
This reference is part of the hdinsightonaks extension for the Azure CLI (version 2.57.0 or higher). The extension will automatically install the first time you run an az hdinsight-on-aks cluster command. Learn more about extensions.
This command group is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Cluster operations.
Commands
Name | Description | Type | Status |
---|---|---|---|
az hdinsight-on-aks cluster create |
Create a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster delete |
Delete a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster instance-view |
Get the status of cluster instances. |
Extension | Preview |
az hdinsight-on-aks cluster instance-view list |
List cluster instance views. |
Extension | Preview |
az hdinsight-on-aks cluster instance-view show |
Get the status of a cluster instance. |
Extension | Preview |
az hdinsight-on-aks cluster job |
Cluster job operations. |
Extension | Preview |
az hdinsight-on-aks cluster job list |
List jobs of HDInsight on AKS cluster. |
Extension | Preview |
az hdinsight-on-aks cluster job run |
Operations on jobs of HDInsight on AKS cluster. |
Extension | Preview |
az hdinsight-on-aks cluster library |
Manage the library of the cluster. |
Extension | Preview |
az hdinsight-on-aks cluster library list |
List all libraries of HDInsight on AKS cluster. |
Extension | Preview |
az hdinsight-on-aks cluster library manage |
Library management operations on HDInsight on AKS cluster. |
Extension | Preview |
az hdinsight-on-aks cluster list |
List the HDInsight clusters under a cluster pool. |
Extension | Preview |
az hdinsight-on-aks cluster list-service-config |
List the config dump of all services running in the cluster. |
Extension | Preview |
az hdinsight-on-aks cluster node-profile |
Manage compute node profile. |
Extension | Preview |
az hdinsight-on-aks cluster node-profile create |
Create a node profile with SKU and worker count. |
Extension | Preview |
az hdinsight-on-aks cluster resize |
Resize an existing cluster. |
Extension | Preview |
az hdinsight-on-aks cluster show |
Get an HDInsight cluster. |
Extension | Preview |
az hdinsight-on-aks cluster update |
Update a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster upgrade |
Upgrade a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster upgrade history |
List the upgrade history of a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster upgrade list |
List the available upgrades for a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster upgrade rollback |
Manually roll back an upgrade for a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster upgrade run |
Upgrade a cluster. |
Extension | Preview |
az hdinsight-on-aks cluster wait |
Place the CLI in a waiting state until a condition is met. |
Extension | Preview |
az hdinsight-on-aks cluster create
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Create a cluster.
az hdinsight-on-aks cluster create --cluster-name
--cluster-pool-name
--resource-group
[--application-log-std-error-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--application-log-std-out-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--assigned-identity-client-id]
[--assigned-identity-id]
[--assigned-identity-object-id]
[--authorization-group-id]
[--authorization-user-id]
[--autoscale-profile-graceful-decommission-timeout]
[--autoscale-profile-type {LoadBased, ScheduleBased}]
[--availability-zones]
[--cluster-type]
[--cluster-version]
[--cooldown-period]
[--coord-debug-port]
[--coord-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]
[--coordinator-debug-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--coordinator-high-availability-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--db-connection-authentication-mode {IdentityAuth, SqlAuth}]
[--deployment-mode {Application, Session}]
[--enable-autoscale {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-la-metrics {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-log-analytics {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-prometheu {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-worker-debug {0, 1, f, false, n, no, t, true, y, yes}]
[--flink-db-auth-mode {IdentityAuth, SqlAuth}]
[--flink-hive-catalog-db-connection-password-secret]
[--flink-hive-catalog-db-connection-url]
[--flink-hive-catalog-db-connection-user-name]
[--flink-storage-key]
[--flink-storage-uri]
[--history-server-cpu]
[--history-server-memory]
[--identity-list]
[--internal-ingress {0, 1, f, false, n, no, t, true, y, yes}]
[--job-manager-cpu]
[--job-manager-memory]
[--job-spec]
[--kafka-profile]
[--key-vault-id]
[--llap-profile]
[--loadbased-config-max-nodes]
[--loadbased-config-min-nodes]
[--loadbased-config-poll-interval]
[--loadbased-config-scaling-rules]
[--location]
[--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
[--nodes]
[--num-replicas]
[--oss-version]
[--ranger-plugin-profile]
[--ranger-profile]
[--schedule-based-config-default-count]
[--schedule-based-config-schedule]
[--schedule-based-config-time-zone]
[--script-action-profiles]
[--secret-reference]
[--service-configs]
[--spark-hive-catalog-db-name]
[--spark-hive-catalog-db-password-secret]
[--spark-hive-catalog-db-server-name]
[--spark-hive-catalog-db-user-name]
[--spark-hive-catalog-key-vault-id]
[--spark-hive-catalog-thrift-url]
[--spark-storage-url]
[--ssh-profile-count]
[--stub-profile]
[--tags]
[--task-manager-cpu]
[--task-manager-memory]
[--trino-hive-catalog]
[--trino-plugins-spec]
[--trino-profile-user-plugins-telemetry-spec]
[--user-plugins-spec]
[--vm-size]
[--worker-debug-port]
[--worker-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]
Examples
Create a simple Trino cluster.
az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type trino --cluster-version {1.2.0} --oss-version {0.440.0} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000"
Create a simple Flink cluster.
az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type flink --flink-storage-uri {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {1.17.0} --nodes '[{"count":5,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --job-manager-cpu {1} --job-manager-memory {2000} --task-manager-cpu {6} --task-manager-memory {49016}
Create a simple Spark cluster.
az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type spark --spark-storage-url {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {3.4.1} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000"
Create a simple Kafka cluster.
az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type kafka --cluster-version {1.2.0} --oss-version {3.6.0} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --kafka-profile '{"disk-storage":{"data-disk-size":8,"data-disk-type":"Standard_SSD_LRS"}}'
Create a Spark cluster with custom hive metastore.
az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type spark --spark-storage-url {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {3.4.1} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --secret-reference '[{reference-name:sqlpassword,secret-name:sqlpassword,type:Secret}]' --key-vault-id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.KeyVault/vaults/CLIKV --spark-hive-kv-id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.KeyVault/vaults/CLIKV --spark-db-auth-mode SqlAuth --spark-hive-db-name {sparkhms} --spark-hive-db-secret {sqlpassword} --spark-hive-db-server {yourserver.database.windows.net} --spark-hive-db-user {username}
Create a Flink cluster with availability zones.
az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type flink --flink-storage-uri {abfs://container@yourstorage.dfs.core.windows.net/} --cluster-version {1.2.0} --oss-version {1.17.0} --nodes '[{"count":5,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --job-manager-cpu {1} --job-manager-memory {2000} --task-manager-cpu {6} --task-manager-memory {49016} --availability-zones [1,2]
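Create a Trino cluster with schedule-based autoscale. This is an illustrative sketch using the documented autoscale flags; all resource names are placeholders, and the field names inside the schedule object (count, days, start-time, end-time) are assumptions about the shorthand shape, not confirmed by this reference.

```shell
az hdinsight-on-aks cluster create -n {clustername} --cluster-pool-name {clusterpoolname} -g {resourcesGroup} -l {location} --cluster-type trino --cluster-version {1.2.0} --oss-version {0.440.0} --nodes '[{"count":2,"type":"worker","vm-size":"Standard_D8d_v5"}]' --identity-list '[{"client-id":"00000000-0000-0000-0000-000000000000","object-id":"00000000-0000-0000-0000-000000000000","resource-id":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourcesGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/yourmsi","type":"cluster"}]' --authorization-user-id "00000000-0000-0000-0000-000000000000" --enable-autoscale true --autoscale-profile-type ScheduleBased --schedule-based-config-time-zone "UTC" --schedule-based-config-default-count 2 --schedule-based-config-schedule '[{"count":3,"days":["Monday"],"start-time":"09:00","end-time":"18:00"}]'
```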
Required Parameters
The name of the HDInsight cluster.
The name of the cluster pool.
Name of resource group. You can configure the default group using az configure --defaults group=<name>
.
Optional Parameters
True if application standard error is enabled, otherwise false.
True if application standard out is enabled, otherwise false.
ClientId of the MSI.
ResourceId of the MSI.
ObjectId of the MSI.
AAD group Ids authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
AAD user Ids authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Graceful decommission timeout in seconds; the maximum time to wait for running containers and applications to complete before a DECOMMISSIONING node transitions to DECOMMISSIONED. The default is 3600 seconds before forced shutdown takes place. A negative value (such as -1) is treated as an infinite timeout.
Specifies which type of autoscale to implement: schedule-based or load-based.
The list of Availability zones to use for AKS VMSS nodes. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The type of cluster.
Version in a three- or four-part format.
The cooldown period in seconds: the amount of time that must elapse between a scaling activity started by a rule and the start of the next scaling activity, regardless of the rule that triggers it. The default value is 300 seconds.
The coordinator debug port. Default: 8008.
Whether to suspend coordinator debug. Default: false.
Whether to enable coordinator debug. Default: false.
Whether to enable coordinator high availability; uses multiple coordinator replicas with automatic failover, one per head node. Default: false.
The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.
The deployment mode of a Flink cluster: Application or Session. Default: Session.
Whether autoscale is enabled on the HDInsight on AKS cluster.
True if metrics are enabled, otherwise false.
True if log analytics is enabled for the cluster, otherwise false.
Whether to enable Prometheus for the cluster.
Whether to enable worker debug for a Trino cluster. Default: false.
The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.
Secret reference name from secretsProfile.secrets containing password for database connection.
Connection string for hive metastore database.
User name for database connection.
Storage key is only required for wasb(s) storage.
The storage account URI used for savepoint and checkpoint state.
History server CPU count.
History server memory size.
The list of managed identity. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Whether to create cluster using private IP instead of public IP. This property must be set at create time.
Job manager CPU count.
Job manager memory size.
Job specification for Flink clusters in Application deployment mode. The specification is immutable even if job properties are changed by calling the RunJob API; use the ListJob API to get the latest job information. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Kafka cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Name of the user Key Vault where all the cluster specific user secrets are stored.
LLAP cluster profile. Support json-file and yaml-file.
The maximum number of nodes for load-based scaling; scaling operates between the minimum and maximum node counts.
The minimum number of nodes for load-based scaling; scaling operates between the minimum and maximum node counts.
The poll interval in seconds after which scaling metrics are polled to trigger a scaling operation.
The scaling rules. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The geo-location where the resource lives. When not specified, the location of the resource group is used.
Do not wait for the long-running operation to finish.
The nodes definitions. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The number of task managers.
Version in a three-part format.
Cluster Ranger plugin profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The ranger cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The default node count for the current schedule configuration; the number of nodes that apply when a specified scaling operation is executed (scale up/scale down).
The schedules for schedule-based autoscale; you can set multiple rules within the schedule across days and times (start/end). Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The time zone in which the schedule is set for the schedule-based autoscale configuration.
The script action profile list. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Properties of Key Vault secret. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The service configs profiles. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The database name.
The secret name which contains the database user password.
The database server host.
The database user name.
The key vault resource id.
The thrift URL.
The default storage URL.
Number of ssh pods per cluster.
Stub cluster profile. Support json-file and yaml-file.
Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Task manager CPU count.
The task manager memory size.
Trino cluster hive catalog options. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Trino user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Trino user telemetry spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Spark user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The virtual machine SKU.
The debug port. Default: 8008.
Whether to suspend worker debug for a Trino cluster. Default: false.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Increase logging verbosity. Use --debug for full debug logs.
az hdinsight-on-aks cluster delete
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Delete a cluster.
az hdinsight-on-aks cluster delete [--cluster-name]
[--cluster-pool-name]
[--ids]
[--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
[--resource-group]
[--subscription]
[--yes]
Examples
Delete a cluster.
az hdinsight-on-aks cluster delete -n {clusterName} --cluster-pool-name {poolName} -g {RG}
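A cluster can also be deleted by resource ID, skipping the confirmation prompt and the wait for completion, using the documented --ids, --yes, and --no-wait flags. The resource ID shape below (Microsoft.HDInsight/clusterpools/.../clusters/...) is an assumption shown for illustration; use the id returned by az hdinsight-on-aks cluster show.

```shell
az hdinsight-on-aks cluster delete --ids "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/{RG}/providers/Microsoft.HDInsight/clusterpools/{poolName}/clusters/{clusterName}" --yes --no-wait
```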
Optional Parameters
The name of the HDInsight cluster.
The name of the cluster pool.
One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.
Do not wait for the long-running operation to finish.
Name of resource group. You can configure the default group using az configure --defaults group=<name>
.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Do not prompt for confirmation.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Increase logging verbosity. Use --debug for full debug logs.
az hdinsight-on-aks cluster list
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
List the HDInsight clusters under a cluster pool.
az hdinsight-on-aks cluster list --cluster-pool-name
--resource-group
[--max-items]
[--next-token]
Examples
List all clusters in a cluster pool.
az hdinsight-on-aks cluster list --cluster-pool-name {poolName} -g {RG}
Required Parameters
The name of the cluster pool.
Name of resource group. You can configure the default group using az configure --defaults group=<name>
.
Optional Parameters
Total number of items to return in the command's output. If the total number of items available is more than the value specified, a token is provided in the command's output. To resume pagination, provide the token value in --next-token
argument of a subsequent command.
Token to specify where to start paginating. This is the token value from a previously truncated response.
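The two pagination flags work together: --max-items caps how many items one call returns, and the token printed with a truncated response feeds --next-token on the following call. A sketch with placeholder names (the token value comes from the previous output):

```shell
az hdinsight-on-aks cluster list --cluster-pool-name {poolName} -g {RG} --max-items 10
# If more clusters exist, the output includes a pagination token; pass it back:
az hdinsight-on-aks cluster list --cluster-pool-name {poolName} -g {RG} --max-items 10 --next-token {tokenFromPreviousOutput}
```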
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Increase logging verbosity. Use --debug for full debug logs.
az hdinsight-on-aks cluster list-service-config
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
List the config dump of all services running in the cluster.
az hdinsight-on-aks cluster list-service-config --cluster-name
--cluster-pool-name
--resource-group
[--max-items]
[--next-token]
Examples
List the config dump of all services running in a cluster.
az hdinsight-on-aks cluster list-service-config --cluster-name {clusterName} --cluster-pool-name {poolName} -g {RG}
Required Parameters
The name of the HDInsight cluster.
The name of the cluster pool.
Name of resource group. You can configure the default group using az configure --defaults group=<name>
.
Optional Parameters
Total number of items to return in the command's output. If the total number of items available is more than the value specified, a token is provided in the command's output. To resume pagination, provide the token value in --next-token
argument of a subsequent command.
Token to specify where to start paginating. This is the token value from a previously truncated response.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Increase logging verbosity. Use --debug for full debug logs.
az hdinsight-on-aks cluster resize
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Resize an existing cluster.
az hdinsight-on-aks cluster resize [--cluster-name]
[--cluster-pool-name]
[--ids]
[--location]
[--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
[--resource-group]
[--subscription]
[--tags]
[--target-worker-node-count]
Examples
Resize a cluster.
az hdinsight-on-aks cluster resize --cluster-name {clusterName} --cluster-pool-name {poolName} -g {RG} -l {westus3} --target-worker-node-count {6}
Optional Parameters
The name of the HDInsight cluster.
The name of the cluster pool.
One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.
The geo-location where the resource lives. When not specified, the location of the resource group is used.
Do not wait for the long-running operation to finish.
Name of resource group. You can configure the default group using az configure --defaults group=<name>
.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Target node count of worker node.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Increase logging verbosity. Use --debug for full debug logs.
az hdinsight-on-aks cluster show
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Get an HDInsight cluster.
az hdinsight-on-aks cluster show [--cluster-name]
[--cluster-pool-name]
[--ids]
[--resource-group]
[--subscription]
Examples
Get a cluster with cluster name.
az hdinsight-on-aks cluster show -n {clusterName} --cluster-pool-name {poolName} -g {RG}
Optional Parameters
The name of the HDInsight cluster.
The name of the cluster pool.
One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.
Name of resource group. You can configure the default group using az configure --defaults group=<name>
.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID
.
Increase logging verbosity. Use --debug for full debug logs.
az hdinsight-on-aks cluster update
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Update a cluster.
az hdinsight-on-aks cluster update [--add]
[--application-log-std-error-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--application-log-std-out-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--assigned-identity-client-id]
[--assigned-identity-id]
[--assigned-identity-object-id]
[--authorization-group-id]
[--authorization-user-id]
[--autoscale-profile-graceful-decommission-timeout]
[--autoscale-profile-type {LoadBased, ScheduleBased}]
[--availability-zones]
[--cluster-name]
[--cluster-pool-name]
[--cluster-version]
[--cooldown-period]
[--coord-debug-port]
[--coord-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]
[--coordinator-debug-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--coordinator-high-availability-enabled {0, 1, f, false, n, no, t, true, y, yes}]
[--db-connection-authentication-mode {IdentityAuth, SqlAuth}]
[--deployment-mode {Application, Session}]
[--enable-autoscale {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-la-metrics {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-log-analytics {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-prometheu {0, 1, f, false, n, no, t, true, y, yes}]
[--enable-worker-debug {0, 1, f, false, n, no, t, true, y, yes}]
[--flink-db-auth-mode {IdentityAuth, SqlAuth}]
[--flink-hive-catalog-db-connection-password-secret]
[--flink-hive-catalog-db-connection-url]
[--flink-hive-catalog-db-connection-user-name]
[--flink-storage-key]
[--flink-storage-uri]
[--force-string {0, 1, f, false, n, no, t, true, y, yes}]
[--history-server-cpu]
[--history-server-memory]
[--identity-list]
[--ids]
[--job-manager-cpu]
[--job-manager-memory]
[--job-spec]
[--kafka-profile]
[--key-vault-id]
[--llap-profile]
[--loadbased-config-max-nodes]
[--loadbased-config-min-nodes]
[--loadbased-config-poll-interval]
[--loadbased-config-scaling-rules]
[--no-wait {0, 1, f, false, n, no, t, true, y, yes}]
[--nodes]
[--num-replicas]
[--oss-version]
[--ranger-plugin-profile]
[--ranger-profile]
[--remove]
[--resource-group]
[--schedule-based-config-default-count]
[--schedule-based-config-schedule]
[--schedule-based-config-time-zone]
[--script-action-profiles]
[--secret-reference]
[--service-configs]
[--set]
[--spark-hive-catalog-db-name]
[--spark-hive-catalog-db-password-secret]
[--spark-hive-catalog-db-server-name]
[--spark-hive-catalog-db-user-name]
[--spark-hive-catalog-key-vault-id]
[--spark-hive-catalog-thrift-url]
[--spark-storage-url]
[--ssh-profile-count]
[--stub-profile]
[--subscription]
[--tags]
[--task-manager-cpu]
[--task-manager-memory]
[--trino-hive-catalog]
[--trino-plugins-spec]
[--trino-profile-user-plugins-telemetry-spec]
[--user-plugins-spec]
[--vm-size]
[--worker-debug-port]
[--worker-debug-suspend {0, 1, f, false, n, no, t, true, y, yes}]
Examples
Update a cluster service-config.
az hdinsight-on-aks cluster update -n {clusterName} --cluster-pool-name {poolName} -g {RG} --service-configs {"[{service-name:yarn-service,configs:[{component:hadoop-config-client,files:[{file-name:yarn-site.xml,values:{yarn.nodemanager.resource.memory-mb:33333}}]}]}]"}
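Other cluster properties can be updated the same way. For example, a sketch replacing resource tags using the documented --tags parameter with shorthand key=value syntax (tag names and values are placeholders):

```shell
az hdinsight-on-aks cluster update -n {clusterName} --cluster-pool-name {poolName} -g {RG} --tags environment=dev owner=data-team
```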
Optional Parameters
Add an object to a list of objects by specifying a path and key value pairs. Example: --add property.listProperty <key=value, string or JSON string>
.
True if application standard error is enabled, otherwise false.
True if application standard out is enabled, otherwise false.
ClientId of the MSI.
ResourceId of the MSI.
ObjectId of the MSI.
AAD group Ids authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
AAD user Ids authorized for data plane access. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Graceful decommission timeout in seconds; the maximum time to wait for running containers and applications to complete before a DECOMMISSIONING node transitions to DECOMMISSIONED. The default is 3600 seconds before forced shutdown takes place. A negative value (such as -1) is treated as an infinite timeout.
Specifies which type of autoscale to implement: schedule-based or load-based.
The list of Availability zones to use for AKS VMSS nodes. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The name of the HDInsight cluster.
The name of the cluster pool.
Version in a three- or four-part format.
The cooldown period in seconds: the amount of time that must elapse between a scaling activity started by a rule and the start of the next scaling activity, regardless of the rule that triggers it. The default value is 300 seconds.
The coordinator debug port. Default: 8008.
Whether to suspend coordinator debug. Default: false.
Whether to enable coordinator debug. Default: false.
Whether to enable coordinator high availability; uses multiple coordinator replicas with automatic failover, one per head node. Default: false.
The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.
The deployment mode of a Flink cluster: Application or Session. Default: Session.
Whether autoscale is enabled on the HDInsight on AKS cluster.
True if metrics are enabled, otherwise false.
True if log analytics is enabled for the cluster, otherwise false.
Whether to enable Prometheus for the cluster.
Whether to enable worker debug for a Trino cluster. Default: false.
The authentication mode to connect to your Hive metastore database. More details: https://learn.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage?view=azuresql#authentication-and-authorization.
Secret reference name from secretsProfile.secrets containing the password for the database connection.
Connection string for the Hive metastore database.
User name for the database connection.
A storage key is required only for wasb(s) storage.
Storage account URI used for savepoint and checkpoint state.
When using 'set' or 'add', preserve string literals instead of attempting to convert to JSON.
History server CPU count.
History server memory size.
The list of managed identity. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.
Job manager CPU count.
Job manager memory size.
Job specifications for Flink clusters in Application deployment mode. The specification is immutable even if job properties are changed by calling the RunJob API; use the ListJob API to get the latest job information. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Kafka cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Name of the user Key Vault where all the cluster specific user secrets are stored.
LLAP cluster profile. Support json-file and yaml-file.
The maximum number of nodes for load-based scaling; load-based scaling scales up and down between the minimum and maximum node counts.
The minimum number of nodes for load-based scaling; load-based scaling scales up and down between the minimum and maximum node counts.
The poll interval: the time period, in seconds, after which scaling metrics are polled to trigger a scaling operation.
The scaling rules. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Do not wait for the long-running operation to finish.
The nodes definitions. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The number of task managers.
Version with three parts.
Cluster Ranger plugin profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The ranger cluster profile. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Remove a property or an element from a list. Example: --remove property.list <indexToRemove> OR --remove propertyToRemove.
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
The default node count of the current schedule configuration. The default node count specifies the number of nodes used when a specified scaling operation is executed (scale up/scale down).
Specifies the schedules for which schedule-based autoscale is enabled; the user can set multiple rules within the schedule across days and times (start/end). Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The time zone in which the schedule is set for schedule-based autoscale configuration.
The script action profile list. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Properties of Key Vault secret. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The service configs profiles. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>.
The database name.
The secret name which contains the database user password.
The database server host.
The database user name.
The key vault resource id.
The thrift url.
The default storage URL.
Number of SSH pods per cluster.
Stub cluster profile. Support json-file and yaml-file.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Resource tags. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Task manager CPU count.
The task manager memory size.
Hive catalog options. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Trino user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Trino user telemetry spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
Spark user plugins spec. Support shorthand-syntax, json-file and yaml-file. Try "??" to show more.
The virtual machine SKU.
The debug port. Default: 8008.
A flag that indicates whether to suspend debugging for the Trino cluster. Default: false.
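The --set, --add, and --remove parameters above follow the Azure CLI generic update pattern. A minimal sketch, assuming an `az hdinsight-on-aks cluster update` command that accepts the cluster name, cluster pool name, and resource group parameters listed above (demoCluster, demoPool, and demoRG are placeholder names; substitute your own):

```shell
# Set a property by path, using the --set syntax documented above.
# demoCluster, demoPool, and demoRG are placeholders.
az hdinsight-on-aks cluster update \
  --cluster-name demoCluster \
  --cluster-pool-name demoPool \
  --resource-group demoRG \
  --set tags.environment=dev

# Remove the same property, using the --remove syntax documented above.
az hdinsight-on-aks cluster update \
  --cluster-name demoCluster \
  --cluster-pool-name demoPool \
  --resource-group demoRG \
  --remove tags.environment
```

Per the --string parameter above, add --string when a value such as "true" or "123" should be stored as a string literal rather than converted to JSON.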
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az hdinsight-on-aks cluster wait
Command group 'az hdinsight-on-aks cluster' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Place the CLI in a waiting state until a condition of the cluster is met.
az hdinsight-on-aks cluster wait [--cluster-name]
[--cluster-pool-name]
[--created]
[--custom]
[--deleted]
[--exists]
[--ids]
[--interval]
[--resource-group]
[--subscription]
[--timeout]
[--updated]
Optional Parameters
The name of the HDInsight cluster.
The name of the cluster pool.
Wait until created with 'provisioningState' at 'Succeeded'.
Wait until the condition satisfies a custom JMESPath query. E.g. provisioningState!='InProgress', instanceView.statuses[?code=='PowerState/running'].
Wait until deleted.
Wait until the resource exists.
One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.
Polling interval in seconds.
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Maximum wait in seconds.
Wait until updated with 'provisioningState' at 'Succeeded'.
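The parameters above combine into a polling loop. A minimal sketch using only the flags documented for this command (demoCluster, demoPool, and demoRG are placeholder names; substitute your own):

```shell
# Block until the cluster reports 'provisioningState' of 'Succeeded'.
# demoCluster, demoPool, and demoRG are placeholders.
az hdinsight-on-aks cluster wait \
  --cluster-name demoCluster \
  --cluster-pool-name demoPool \
  --resource-group demoRG \
  --created

# Alternatively, wait on a custom JMESPath condition, polling every
# 30 seconds for at most 600 seconds before giving up.
az hdinsight-on-aks cluster wait \
  --cluster-name demoCluster \
  --cluster-pool-name demoPool \
  --resource-group demoRG \
  --custom "provisioningState!='InProgress'" \
  --interval 30 \
  --timeout 600
```

A common pattern is to start a long-running operation with --no-wait and then use this command to block until it completes.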
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.