Azure libraries for Python usage patterns

The Azure SDK for Python is composed of many independent libraries, which are listed on the Python SDK package index.

All the libraries share certain common characteristics and usage patterns, such as installation and the use of inline JSON for object arguments.

Set up your local development environment

If you haven't already, set up an environment where you can run this code. A local Python virtual environment is one straightforward option.

Library installation

To install a specific library package, use pip install:

# Install the management library for Azure Storage
pip install azure-mgmt-storage
# Install the client library for Azure Blob Storage
pip install azure-storage-blob

pip install retrieves the latest version of a library in your current Python environment.

You can also use pip to uninstall libraries and install specific versions, including preview versions. For more information, see How to install Azure library packages for Python.
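
For example, the following commands show how you might pin a library to a specific version, install a preview release, or remove a library (the version number shown is only illustrative):

# Install a specific version of a library (version number is illustrative)
pip install azure-storage-blob==12.19.0

# Install a preview version of a library
pip install --pre azure-mgmt-web

# Uninstall a library
pip uninstall azure-storage-blob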

Asynchronous operations

Asynchronous libraries

Many client and management libraries provide async versions (.aio). The asyncio library has been available since Python 3.4, and the async/await keywords were introduced in Python 3.5. The async versions of the libraries are intended to be used with Python 3.5 and later.

Examples of Azure Python SDK libraries with async versions include: azure.storage.blob.aio, azure.servicebus.aio, azure.mgmt.keyvault.aio, and azure.mgmt.compute.aio.

These libraries need an async transport such as aiohttp to work. The azure-core library provides an async transport, AioHttpTransport, which is used by the async libraries.
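
If you use one of the async libraries in an environment where aiohttp isn't already present, install it alongside the library:

# Install the default async transport used by the async client libraries
pip install aiohttp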

The following code shows how to create a client for the async version of the Azure Blob Storage library; the storage endpoint URL is assumed to come from an environment variable:

import asyncio
import os
import uuid

from azure.identity.aio import DefaultAzureCredential
from azure.storage.blob.aio import BlobClient

# URL of the storage account's blob service endpoint, for example
# "https://<storage-account-name>.blob.core.windows.net" (assumed here to
# come from an environment variable).
storage_url = os.environ["AZURE_STORAGE_BLOB_URL"]

credential = DefaultAzureCredential()

async def run():

    async with BlobClient(
        storage_url,
        container_name="blob-container-01",
        blob_name=f"sample-blob-{str(uuid.uuid4())[0:5]}.txt",
        credential=credential,
    ) as blob_client:

        # Open a local file and upload its contents to Blob Storage
        with open("./sample-source.txt", "rb") as data:
            await blob_client.upload_blob(data)
            print(f"Uploaded sample-source.txt to {blob_client.url}")

        # Close the credential when the upload is done
        await credential.close()

asyncio.run(run())

The full example is on GitHub at use_blob_auth_async.py. For the synchronous version of this code, see Example: Upload a blob.

Long-running operations

Some management operations that you invoke (such as ComputeManagementClient.virtual_machines.begin_create_or_update and WebSiteManagementClient.web_apps.begin_create_or_update) return a poller for long-running operations, LROPoller[<type>], where <type> is specific to the operation in question.

Note

You may notice differences in method names between libraries, which are due to version differences. Older libraries that aren't based on azure.core typically use names like create_or_update. Libraries based on azure.core add the begin_ prefix to method names to better indicate that they are long-running operations. Migrating old code to a newer azure.core-based library typically means adding the begin_ prefix to method names, because most method signatures remain the same.

The LROPoller return type means that the operation is asynchronous. Accordingly, you must call that poller's result method to wait for the operation to finish and obtain its result.

The following code, taken from Example: Create and deploy a web app, shows an example of using the poller to wait for a result:

poller = app_service_client.web_apps.begin_create_or_update(RESOURCE_GROUP_NAME,
    WEB_APP_NAME,
    {
        "location": LOCATION,
        "server_farm_id": plan_result.id,
        "site_config": {
            "linux_fx_version": "python|3.8"
        }
    }
)

web_app_result = poller.result()

In this case, the return value of begin_create_or_update is of type LROPoller[Site], which means that the return value of poller.result() is a Site object.

Exceptions

In general, the Azure libraries raise exceptions when operations fail to perform as intended, including failed HTTP requests to the Azure REST API. For app code, you can use try...except blocks around library operations.

For more information on the type of exceptions that may be raised, see the documentation for the operation in question.
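
As a minimal sketch, the following code wraps a resource group operation in a try...except block. HttpResponseError is the general exception that azure.core-based libraries raise for failed HTTP requests; the client construction and the AZURE_SUBSCRIPTION_ID environment variable are assumptions for illustration.

import os

from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# The subscription ID is assumed to be available in an environment variable.
resource_client = ResourceManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

try:
    rg_result = resource_client.resource_groups.create_or_update(
        "PythonSDKExample-rg", {"location": "centralus"}
    )
    print(f"Provisioned resource group {rg_result.name}")
except HttpResponseError as ex:
    # The exception includes the HTTP status code and the error message
    # returned by the Azure REST API.
    print(f"Operation failed: {ex}")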

Logging

The most recent Azure libraries use the Python standard logging library to generate log output. You can set the logging level for individual libraries, groups of libraries, or all libraries. Once you register a logging stream handler, you can then enable logging for a specific client object or a specific operation. For more information, see Logging in the Azure libraries.
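
As a minimal sketch, the following code registers a stream handler on the top-level "azure" logger, which the Azure SDK libraries log to:

import logging
import sys

# Acquire the logger shared by the Azure SDK libraries and set its level.
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)

# Direct log output to stdout; any standard logging handler works here.
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

With a handler registered, you can then pass logging_enable=True to a client constructor or to an individual method to turn on detailed output for that client or operation.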

Proxy configuration

To specify a proxy, you can use environment variables or optional arguments. For more information, see How to configure proxies.
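
As a sketch of both approaches (the proxy address is a placeholder), you can either set the environment variables before creating client objects or pass a proxies dictionary as an optional argument:

import os

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Option 1: environment variables, honored when use_env_settings is True (the default)
os.environ["HTTP_PROXY"] = "http://10.10.1.10:1180"
os.environ["HTTPS_PROXY"] = "http://10.10.1.10:1180"

# Option 2: the proxies keyword argument on a client constructor
blob_service_client = BlobServiceClient(
    "https://<storage-account-name>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
    proxies={"http": "http://10.10.1.10:1180", "https": "http://10.10.1.10:1180"},
)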

Optional arguments for client objects and methods

In the library reference documentation, you often see a **kwargs or **operation_config argument in the signature of a client object constructor or a specific operation method. These placeholders indicate that the object or method in question may support other named arguments. Typically, the reference documentation indicates the specific arguments you can use. There are also some general arguments that are often supported as described in the following sections.

Arguments for libraries based on azure.core

These arguments apply to the libraries listed on Python - New Libraries. For example, here is a subset of the keyword arguments for azure-core. For a complete list, see the GitHub README for azure-core.

logging_enable (bool, default False): Enables logging. For more information, see Logging in the Azure libraries.
proxies (dict, default {}): Proxy server URLs. For more information, see How to configure proxies.
use_env_settings (bool, default True): If True, allows use of the HTTP_PROXY and HTTPS_PROXY environment variables for proxies. If False, the environment variables are ignored. For more information, see How to configure proxies.
connection_timeout (int, default 300): The timeout in seconds for making a connection to Azure REST API endpoints.
read_timeout (int, default 300): The timeout in seconds for completing an Azure REST API operation (that is, waiting for a response).
retry_total (int, default 10): The number of allowable retry attempts for REST API calls. Use retry_total=0 to disable retries.
retry_mode (enum, default exponential): Applies retry timing in a fixed or exponential manner. If 'fixed', retries are made at regular intervals; if 'exponential', each retry waits twice as long as the previous one.

Individual libraries aren't obligated to support any of these arguments, so always consult the reference documentation for each library for exact details. Also, each library may support other arguments. For example, for blob storage specific keyword arguments, see the GitHub README for azure-storage-blob.
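
As a sketch, here's how a few of these arguments might be passed when constructing a client; the vault URL is a placeholder and the specific values are only illustrative:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

secret_client = SecretClient(
    "https://<keyvault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
    logging_enable=True,     # emit detailed (DEBUG) log output for this client
    retry_total=5,           # allow at most five retry attempts per request
    connection_timeout=60,   # seconds to wait when establishing a connection
)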

Inline JSON pattern for object arguments

Many operations within the Azure libraries allow you to express object arguments either as discrete objects or as inline JSON.

For example, suppose you have a ResourceManagementClient object through which you create a resource group with its create_or_update method. The second argument to this method is of type ResourceGroup.

To call the create_or_update method, you can create a discrete instance of ResourceGroup directly with its required arguments (location in this case):

# Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(
    "PythonSDKExample-rg",
    ResourceGroup(location="centralus")
)

Alternatively, you can pass the same parameters as inline JSON:

# Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(
    "PythonAzureExample-rg", {"location": "centralus"}
)

When you use inline JSON, the Azure libraries automatically convert the inline JSON to the appropriate object type for the argument in question.

Objects can also have nested object arguments, in which case you can also use nested JSON.

For example, suppose you have an instance of the KeyVaultManagementClient object, and are calling its create_or_update method. In this case, the third argument is of type VaultCreateOrUpdateParameters, which itself contains an argument of type VaultProperties. VaultProperties, in turn, contains object arguments of type Sku and list[AccessPolicyEntry]. A Sku contains a SkuName object, and each AccessPolicyEntry contains a Permissions object.

To call begin_create_or_update with embedded objects, you use code like the following (assuming tenant_id, object_id, and LOCATION are already defined). You can also create the nested objects in separate statements before the call rather than inline.

# Provision a Key Vault using inline parameters.
# The model classes come from the management library's models module.
from azure.mgmt.keyvault.models import (
    AccessPolicyEntry, Permissions, Sku, VaultCreateOrUpdateParameters, VaultProperties
)

poller = keyvault_client.vaults.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    KEY_VAULT_NAME_A,
    VaultCreateOrUpdateParameters(
        location=LOCATION,
        properties=VaultProperties(
            tenant_id=tenant_id,
            sku=Sku(
                name="standard",
                family="A"
            ),
            access_policies=[
                AccessPolicyEntry(
                    tenant_id=tenant_id,
                    object_id=object_id,
                    permissions=Permissions(
                        keys=['all'],
                        secrets=['all']
                    )
                )
            ]
        )
    )
)

key_vault1 = poller.result()

The same call using inline JSON appears as follows:

# Provision a Key Vault using inline JSON
poller = keyvault_client.vaults.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    KEY_VAULT_NAME_B,
    {
        'location': LOCATION,
        'properties': {
            'sku': {
                'name': 'standard',
                'family': 'A'
            },
            'tenant_id': tenant_id,
            'access_policies': [{
                'tenant_id': tenant_id,
                'object_id': object_id,                
                'permissions': {
                    'keys': ['all'],
                    'secrets': ['all']
                }
            }]
        }
    }
)

key_vault2 = poller.result()

Because both forms are equivalent, you can choose whichever you prefer and even intermix them. (The full code for these examples can be found on GitHub.)

If your JSON isn't formed properly, you typically get the error, "DeserializationError: Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get'". A common cause of this error is that you're providing a single string for a property when the library expects a nested JSON object. For example, using 'sku': 'standard' in the previous example generates this error because the sku parameter is a Sku object that expects inline object JSON, in this case {'name': 'standard'}, which maps to the expected SkuName type.

Next steps

Now that you understand the common patterns for using the Azure libraries for Python, see the following standalone examples to explore specific management and client library scenarios. You can try these examples in any order as they're not sequential or interdependent.