ContainerClient Class
A client to interact with a specific container, although that container may not yet exist.
For operations relating to a specific blob within this container, a blob client can be retrieved using the get_blob_client function.
- Inheritance
- azure.storage.blob._shared.base_client.StorageAccountHostsMixin → ContainerClient
- azure.storage.blob._encryption.StorageEncryptionMixin → ContainerClient
Constructor
ContainerClient(account_url: str, container_name: str, credential: str | Dict[str, str] | AzureNamedKeyCredential | AzureSasCredential | TokenCredential | None = None, **kwargs: Any)
Parameters
- account_url
- str
The URI to the storage account. In order to create a client given the full URI to the container, use the from_container_url classmethod.
- container_name
- str
The name of the container for the blob.
- credential
The credentials with which to authenticate. This is optional if the account URL already has a SAS token. The value can be a SAS token string, an instance of an AzureSasCredential or AzureNamedKeyCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredential class from azure.identity. If the resource URI already contains a SAS token, this will be ignored in favor of an explicit credential, except in the case of AzureSasCredential, where the conflicting SAS tokens will raise a ValueError. If using an instance of AzureNamedKeyCredential, "name" should be the storage account name, and "key" should be the storage account key.
- api_version
- str
The Storage API version to use for requests. Default value is the most recent service version that is compatible with the current SDK. Setting to an older version may result in reduced feature compatibility.
New in version 12.2.0.
- secondary_hostname
- str
The hostname of the secondary endpoint.
- max_block_size
- int
The maximum chunk size for uploading a block blob in chunks.
Defaults to 4*1024*1024, or 4MB.
- max_single_put_size
- int
If the blob size is less than or equal to max_single_put_size, then the blob will be uploaded with only one HTTP PUT request. If the blob size is larger than max_single_put_size, the blob will be uploaded in chunks. Defaults to 64*1024*1024, or 64MB.
- min_large_block_upload_threshold
- int
The minimum chunk size required to use the memory efficient algorithm when uploading a block blob. Defaults to 4*1024*1024+1.
- use_byte_buffer
- bool
Use a byte buffer for block blob uploads. Defaults to False.
- max_page_size
- int
The maximum chunk size for uploading a page blob. Defaults to 4*1024*1024, or 4MB.
- max_single_get_size
- int
The maximum size for a blob to be downloaded in a single call; the exceeded part will be downloaded in chunks (could be parallel). Defaults to 32*1024*1024, or 32MB.
- max_chunk_get_size
- int
The maximum chunk size used for downloading a blob. Defaults to 4*1024*1024, or 4MB.
Examples
Get a ContainerClient from an existing BlobServiceClient.
# Instantiate a BlobServiceClient using a connection string
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient.from_connection_string(self.connection_string)
# Instantiate a ContainerClient
container_client = blob_service_client.get_container_client("mynewcontainer")
Creating the container client directly.
from azure.storage.blob import ContainerClient
sas_url = "https://account.blob.core.windows.net/mycontainer?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D"
container = ContainerClient.from_container_url(sas_url)
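Creating the container client directly from an account URL and a token credential (a minimal sketch; the account URL and container name below are placeholders, and DefaultAzureCredential from azure.identity is just one possible credential type):
from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

# Placeholder account URL and container name; DefaultAzureCredential resolves
# an Azure AD identity from the environment (CLI login, managed identity, etc.)
container_client = ContainerClient(
    account_url="https://myaccount.blob.core.windows.net",
    container_name="mycontainer",
    credential=DefaultAzureCredential())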
Methods
- acquire_lease: Requests a new lease. If the container does not have an active lease, the Blob service creates a lease on the container and returns a new lease ID.
- create_container: Creates a new container under the specified account. If a container with the same name already exists, the operation fails.
- delete_blob: Marks the specified blob or snapshot for deletion. The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots; you can delete both at the same time with the delete_blob operation. If a delete retention policy is enabled for the service, this operation soft deletes the blob or snapshot and retains it for the specified number of days, after which the data is removed during garbage collection. A soft-deleted blob or snapshot is accessible through list_blobs with the include=["deleted"] option and can be restored using BlobClient.undelete.
- delete_blobs: Marks the specified blobs or snapshots for deletion. The blobs are later deleted during garbage collection. Note that in order to delete blobs, you must delete all of their snapshots; you can delete both at the same time with the delete_blobs operation. If a delete retention policy is enabled for the service, this operation soft deletes the blobs or snapshots and retains them for the specified number of days, after which the data is removed during garbage collection. Soft-deleted blobs or snapshots are accessible through list_blobs with the include=["deleted"] option and can be restored using BlobClient.undelete. The maximum number of blobs that can be deleted in a single request is 256.
- delete_container: Marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.
- download_blob: Downloads a blob to the StorageStreamDownloader. The readall() method must be used to read all the content, or readinto() must be used to download the blob into a stream. Using chunks() returns an iterator which allows the user to iterate over the content in chunks.
- exists: Returns True if a container exists and False otherwise.
- find_blobs_by_tags: Returns a generator to list the blobs under the specified container whose tags match the given search expression. The generator will lazily follow the continuation tokens returned by the service.
- from_connection_string: Create ContainerClient from a connection string.
- from_container_url: Create ContainerClient from a container URL.
- get_account_information: Gets information related to the storage account. The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include 'sku_name' and 'account_kind'.
- get_blob_client: Get a client to interact with the specified blob. The blob need not already exist.
- get_container_access_policy: Gets the permissions for the specified container. The permissions indicate whether container data may be accessed publicly.
- get_container_properties: Returns all user-defined metadata and system properties for the specified container. The data returned does not include the container's list of blobs.
- list_blob_names: Returns a generator to list the names of blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service. Note that no additional properties or metadata will be returned when using this API, and it has no option to include additional blobs such as snapshots, versions, or soft-deleted blobs. To get any of this data, use list_blobs.
- list_blobs: Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service.
- set_container_access_policy: Sets the permissions for the specified container or stored access policies that may be used with Shared Access Signatures. The permissions indicate whether blobs in a container may be accessed publicly.
- set_container_metadata: Sets one or more user-defined name-value pairs for the specified container. Each call to this operation replaces all existing metadata attached to the container. To remove all metadata from the container, call this operation with no metadata dict.
- set_premium_page_blob_tier_blobs: Sets the page blob tiers on all blobs. This API is only supported for page blobs on premium accounts. The maximum number of blobs that can be updated in a single request is 256.
- set_standard_blob_tier_blobs: Sets the tier on block blobs. A block blob's tier determines Hot/Cool/Archive storage type. This operation does not update the blob's ETag. The maximum number of blobs that can be updated in a single request is 256.
- upload_blob: Creates a new blob from a data source with automatic chunking.
- walk_blobs: Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service. This operation will list blobs in accordance with a hierarchy, as delimited by the specified delimiter character.
acquire_lease
Requests a new lease. If the container does not have an active lease, the Blob service creates a lease on the container and returns a new lease ID.
acquire_lease(lease_duration: int = -1, lease_id: str | None = None, **kwargs) -> BlobLeaseClient
Parameters
- lease_duration
- int
Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. Default is -1 (infinite lease).
- lease_id
- str
Proposed lease ID, in a GUID string format. The Blob service returns 400 (Invalid request) if the proposed lease ID is not in the correct format.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- etag
- str
An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.
- match_condition
- MatchConditions
The match condition to use upon the etag.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
A BlobLeaseClient object that can be run in a context manager.
Return type
BlobLeaseClient
Examples
Acquiring a lease on the container.
# Acquire a lease on the container
lease = container_client.acquire_lease()
# Delete container by passing in the lease
container_client.delete_container(lease=lease)
create_container
Creates a new container under the specified account. If the container with the same name already exists, the operation fails.
create_container(metadata: Dict[str, str] | None = None, public_access: PublicAccess | str | None = None, **kwargs: Any) -> Dict[str, str | datetime]
Parameters
- metadata
- dict[str, str]
A dict with name-value pairs to associate with the container as metadata. Example: {'Category':'test'}
- container_encryption_scope
- dict or ContainerEncryptionScope
Specifies the default encryption scope to set on the container and use for all future writes.
New in version 12.2.0.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
A dictionary of response headers.
Return type
Dict[str, str | datetime]
Examples
Creating a container to store blobs.
container_client.create_container()
delete_blob
Marks the specified blob or snapshot for deletion.
The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots. You can delete both at the same time with the delete_blob operation.
If a delete retention policy is enabled for the service, then this operation soft deletes the blob or snapshot and retains it for the specified number of days. After the specified number of days, the blob's data is removed from the service during garbage collection. A soft-deleted blob or snapshot is accessible through list_blobs by specifying the include=["deleted"] option. A soft-deleted blob or snapshot can be restored using BlobClient.undelete.
delete_blob(blob: str | BlobProperties, delete_snapshots: str | None = None, **kwargs) -> None
Parameters
- blob
- str or BlobProperties
The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.
- delete_snapshots
- str
Required if the blob has associated snapshots. Values include:
"only": Deletes only the blob's snapshots.
"include": Deletes the blob along with all snapshots.
- version_id
- str
The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to delete.
New in version 12.4.0.
This keyword argument was introduced in API version '2019-12-12'.
- lease
- BlobLeaseClient or str
Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- etag
- str
An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.
- match_condition
- MatchConditions
The match condition to use upon the etag.
- if_tags_match_condition
- str
Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".
New in version 12.4.0.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Return type
None
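Examples
Deleting a single blob (a minimal sketch, assuming a container_client instance created as in the earlier examples and a blob named "my_blob"):
# Delete the blob; pass delete_snapshots="include" if the blob has snapshots
container_client.delete_blob("my_blob")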
delete_blobs
Marks the specified blobs or snapshots for deletion.
The blobs are later deleted during garbage collection. Note that in order to delete blobs, you must delete all of their snapshots. You can delete both at the same time with the delete_blobs operation.
If a delete retention policy is enabled for the service, then this operation soft deletes the blobs or snapshots and retains them for the specified number of days. After the specified number of days, the blobs' data is removed from the service during garbage collection. Soft-deleted blobs or snapshots are accessible through list_blobs by specifying the include=["deleted"] option. Soft-deleted blobs or snapshots can be restored using BlobClient.undelete.
The maximum number of blobs that can be deleted in a single request is 256.
delete_blobs(*blobs: str | Dict[str, Any] | BlobProperties, **kwargs: Any) -> Iterator[HttpResponse]
Parameters
- blobs
- str or dict(str, <xref:Any>) or BlobProperties
The blobs to delete. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.
Note
When the blob type is dict, the following keys and value rules apply.
blob name:
key: 'name', value type: str
snapshot you want to delete:
key: 'snapshot', value type: str
version id:
key: 'version_id', value type: str
whether to delete snapshots when deleting blob:
key: 'delete_snapshots', value: 'include' or 'only'
if the blob modified or not:
key: 'if_modified_since', 'if_unmodified_since', value type: datetime
etag:
key: 'etag', value type: str
match the etag or not:
key: 'match_condition', value type: MatchConditions
tags match condition:
key: 'if_tags_match_condition', value type: str
lease:
key: 'lease_id', value type: Union[str, LeaseClient]
timeout for subrequest:
key: 'timeout', value type: int
- delete_snapshots
- str
Required if a blob has associated snapshots. Values include:
"only": Deletes only the blob's snapshots.
"include": Deletes the blob along with all snapshots.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- if_tags_match_condition
- str
Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".
New in version 12.4.0.
- raise_on_any_failure
- bool
This is a boolean param which defaults to True. When this is set, an exception is raised even if there is a single operation failure.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
An iterator of responses, one for each blob in order.
Return type
Iterator[HttpResponse]
Examples
Deleting multiple blobs.
# Delete multiple blobs in the container by name
container_client.delete_blobs("my_blob1", "my_blob2")
# Delete multiple blobs by properties iterator
my_blobs = container_client.list_blobs(name_starts_with="my_blob")
container_client.delete_blobs(*my_blobs)
delete_container
Marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.
delete_container(**kwargs: Any) -> None
Parameters
- lease
- BlobLeaseClient or str
If specified, delete_container only succeeds if the container's lease is active and matches this ID. Required if the container has an active lease.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- etag
- str
An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.
- match_condition
- MatchConditions
The match condition to use upon the etag.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Return type
None
Examples
Delete a container.
container_client.delete_container()
download_blob
Downloads a blob to the StorageStreamDownloader. The readall() method must be used to read all the content or readinto() must be used to download the blob into a stream. Using chunks() returns an iterator which allows the user to iterate over the content in chunks.
download_blob(blob: str | BlobProperties, offset: int = None, length: int = None, *, encoding: str, **kwargs) -> StorageStreamDownloader[str]
Parameters
- blob
- str or BlobProperties
The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.
- offset
- int
Start of byte range to use for downloading a section of the blob. Must be set if length is provided.
- length
- int
Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.
- version_id
- str
The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to download.
New in version 12.4.0.
This keyword argument was introduced in API version '2019-12-12'.
- validate_content
- bool
If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.
- lease
- BlobLeaseClient or str
Required if the blob has an active lease. If specified, download_blob only succeeds if the blob's lease is active and matches this ID. Value can be a BlobLeaseClient object or the lease ID as a string.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- etag
- str
An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.
- match_condition
- MatchConditions
The match condition to use upon the etag.
- if_tags_match_condition
- str
Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".
New in version 12.4.0.
- cpk
- CustomerProvidedEncryptionKey
Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.
- max_concurrency
- int
The number of parallel connections with which to download.
- encoding
- str
Encoding to decode the downloaded bytes. Default is None, i.e. no decoding.
- progress_hook
- Callable[[int, int], None]
A callback to track the progress of a long running download. The signature is function(current: int, total: int) where current is the number of bytes transferred so far, and total is the total size of the download.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately. This method may make multiple calls to the service and the timeout will apply to each call individually.
Returns
A streaming object (StorageStreamDownloader)
Return type
StorageStreamDownloader
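Examples
Downloading a blob (a minimal sketch, assuming a container_client instance created as in the earlier examples and a blob named "my_blob"):
# Read the whole blob into memory
downloader = container_client.download_blob("my_blob")
data = downloader.readall()

# Or stream the download into a local file
with open("my_blob_copy", "wb") as stream:
    container_client.download_blob("my_blob").readinto(stream)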
exists
Returns True if a container exists and returns False otherwise.
exists(**kwargs: Any) -> bool
Parameters
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
boolean
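Examples
Checking whether the container exists (a minimal sketch, assuming a container_client instance created as in the earlier examples):
# exists() returns True or False without raising if the container is missing
if container_client.exists():
    print("Container exists")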
find_blobs_by_tags
Returns a generator to list the blobs under the specified container whose tags match the given search expression. The generator will lazily follow the continuation tokens returned by the service.
find_blobs_by_tags(filter_expression: str, **kwargs: Any | None) -> ItemPaged[FilteredBlob]
Parameters
- filter_expression
- str
The expression to find blobs whose tags match the specified condition, e.g. "\"yourtagname\"='firsttag' and \"yourtagname2\"='secondtag'".
- results_per_page
- int
The max result per page when paginating.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
An iterable (auto-paging) response of FilteredBlob.
Return type
ItemPaged[FilteredBlob]
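Examples
Finding blobs by tag (a minimal sketch, assuming a container_client instance created as in the earlier examples; the tag name and value are placeholders):
# List blobs whose tags match the filter expression
for filtered_blob in container_client.find_blobs_by_tags("\"Project\"='alpha'"):
    print(filtered_blob.name)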
from_connection_string
Create ContainerClient from a Connection String.
from_connection_string(conn_str: str, container_name: str, credential: str | Dict[str, str] | AzureNamedKeyCredential | AzureSasCredential | TokenCredential | None = None, **kwargs: Any) -> Self
Parameters
- conn_str
- str
A connection string to an Azure Storage account.
- container_name
- str
The container name for the blob.
- credential
The credentials with which to authenticate. This is optional if the account URL already has a SAS token, or the connection string already has shared access key values. The value can be a SAS token string, an instance of an AzureSasCredential or AzureNamedKeyCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredential class from azure.identity. Credentials provided here will take precedence over those in the connection string. If using an instance of AzureNamedKeyCredential, "name" should be the storage account name, and "key" should be the storage account key.
Returns
A container client.
Return type
ContainerClient
Examples
Creating the ContainerClient from a connection string.
from azure.storage.blob import ContainerClient
container_client = ContainerClient.from_connection_string(
    self.connection_string, container_name="mycontainer")
from_container_url
Create ContainerClient from a container url.
from_container_url(container_url: str, credential: str | Dict[str, str] | AzureNamedKeyCredential | AzureSasCredential | TokenCredential | None = None, **kwargs: Any) -> Self
Parameters
- container_url
- str
The full endpoint URL to the Container, including SAS token if used. This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode.
- credential
The credentials with which to authenticate. This is optional if the container URL already has a SAS token. The value can be a SAS token string, an instance of an AzureSasCredential or AzureNamedKeyCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredential class from azure.identity. If the resource URI already contains a SAS token, this will be ignored in favor of an explicit credential, except in the case of AzureSasCredential, where the conflicting SAS tokens will raise a ValueError. If using an instance of AzureNamedKeyCredential, "name" should be the storage account name, and "key" should be the storage account key.
Returns
A container client.
Return type
ContainerClient
get_account_information
Gets information related to the storage account.
The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include 'sku_name' and 'account_kind'.
get_account_information(**kwargs: Any) -> Dict[str, str]
Returns
A dict of account information (SKU and account type).
Return type
Dict[str, str]
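Examples
Reading the account information (a minimal sketch, assuming a container_client instance created as in the earlier examples):
# The returned dict includes 'sku_name' and 'account_kind'
account_info = container_client.get_account_information()
print(account_info['sku_name'], account_info['account_kind'])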
get_blob_client
Get a client to interact with the specified blob.
The blob need not already exist.
get_blob_client(blob: str | BlobProperties, snapshot: str = None) -> BlobClient
Parameters
- blob
- str or BlobProperties
The blob with which to interact.
- snapshot
- str
The optional blob snapshot on which to operate. This can be the snapshot ID string or the response returned from create_snapshot.
Returns
A BlobClient.
Return type
BlobClient
Examples
Get the blob client.
# Get the BlobClient from the ContainerClient to interact with a specific blob
blob_client = container_client.get_blob_client("mynewblob")
get_container_access_policy
Gets the permissions for the specified container. The permissions indicate whether container data may be accessed publicly.
get_container_access_policy(**kwargs: Any) -> Dict[str, Any]
Parameters
- lease
- BlobLeaseClient or str
If specified, get_container_access_policy only succeeds if the container's lease is active and matches this ID.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
Access policy information in a dict.
Return type
Dict[str, Any]
Examples
Getting the access policy on the container.
policy = container_client.get_container_access_policy()
get_container_properties
Returns all user-defined metadata and system properties for the specified container. The data returned does not include the container's list of blobs.
get_container_properties(**kwargs: Any) -> ContainerProperties
Parameters
- lease
- BlobLeaseClient or str
If specified, get_container_properties only succeeds if the container's lease is active and matches this ID.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
Properties for the specified container within a container object.
Return type
ContainerProperties
Examples
Getting properties on the container.
properties = container_client.get_container_properties()
list_blob_names
Returns a generator to list the names of blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service.
Note that no additional properties or metadata will be returned when using this API. Additionally, this API does not have an option to include additional blobs such as snapshots, versions, soft-deleted blobs, etc. To get any of this data, use list_blobs.
list_blob_names(**kwargs: Any) -> ItemPaged[str]
Parameters
- name_starts_with
- str
Filters the results to return only blobs whose names begin with the specified prefix.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
An iterable (auto-paging) response of blob names as strings.
Return type
ItemPaged[str]
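Examples
Listing blob names (a minimal sketch, assuming a container_client instance created as in the earlier examples; the prefix is a placeholder):
# List only the names of blobs that start with the given prefix
for blob_name in container_client.list_blob_names(name_starts_with="my_blob"):
    print(blob_name)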
list_blobs
Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service.
list_blobs(name_starts_with: str | None = None, include: str | List[str] | None = None, **kwargs: Any) -> ItemPaged[BlobProperties]
Parameters
- name_starts_with
- str
Filters the results to return only blobs whose names begin with the specified prefix.
- include
- list[str] or str
Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy', 'legalhold'.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
An iterable (auto-paging) response of BlobProperties.
Return type
ItemPaged[BlobProperties]
Examples
List the blobs in the container.
blobs_list = container_client.list_blobs()
for blob in blobs_list:
    print(blob.name + '\n')
set_container_access_policy
Sets the permissions for the specified container or stored access policies that may be used with Shared Access Signatures. The permissions indicate whether blobs in a container may be accessed publicly.
set_container_access_policy(signed_identifiers: Dict[str, AccessPolicy], public_access: str | PublicAccess | None = None, **kwargs) -> Dict[str, str | datetime]
Parameters
- signed_identifiers
- dict[str, AccessPolicy]
A dictionary of access policies to associate with the container. The dictionary may contain up to 5 elements. An empty dictionary will clear the access policies set on the service.
- lease
- BlobLeaseClient or str
Required if the container has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.
- if_modified_since
- datetime
A datetime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified date/time.
- if_unmodified_since
- datetime
A datetime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
Container-updated property dict (Etag and last modified).
Return type
Dict[str, str | datetime]
Examples
Setting access policy on the container.
# Create access policy
from datetime import datetime, timedelta
from azure.storage.blob import AccessPolicy, ContainerSasPermissions
access_policy = AccessPolicy(permission=ContainerSasPermissions(read=True),
                             expiry=datetime.utcnow() + timedelta(hours=1),
                             start=datetime.utcnow() - timedelta(minutes=1))
identifiers = {'test': access_policy}
# Set the access policy on the container
container_client.set_container_access_policy(signed_identifiers=identifiers)
set_container_metadata
Sets one or more user-defined name-value pairs for the specified container. Each call to this operation replaces all existing metadata attached to the container. To remove all metadata from the container, call this operation with no metadata dict.
set_container_metadata(metadata: Dict[str, str] | None = None, **kwargs) -> Dict[str, str | datetime]
Parameters
- metadata
- dict[str, str]
A dict containing name-value pairs to associate with the container as metadata. Example: {'category':'test'}
- lease
- BlobLeaseClient or str
If specified, set_container_metadata only succeeds if the container's lease is active and matches this ID.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- etag
- str
An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
Container-updated property dict (Etag and last modified).
Return type
Dict[str, str | datetime]
Examples
Setting metadata on the container.
# Create key, value pairs for metadata
metadata = {'type': 'test'}
# Set metadata on the container
container_client.set_container_metadata(metadata=metadata)
set_premium_page_blob_tier_blobs
Sets the page blob tiers on all blobs. This API is only supported for page blobs on premium accounts.
The maximum number of blobs that can be updated in a single request is 256.
set_premium_page_blob_tier_blobs(premium_page_blob_tier: str | PremiumPageBlobTier | None, *blobs: str | Dict[str, Any] | BlobProperties, **kwargs: Any) -> Iterator[HttpResponse]
Parameters
- premium_page_blob_tier
- PremiumPageBlobTier
A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.
Note
If you want to set a different tier on different blobs, set this positional parameter to None. The blob tier on every BlobProperties will then be used.
- blobs
- str or dict(str, <xref:Any>) or BlobProperties
The blobs with which to interact. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.
Note
When the blob type is dict, the following keys and value rules apply.
blob name:
key: 'name', value type: str
premium blob tier:
key: 'blob_tier', value type: PremiumPageBlobTier
lease:
key: 'lease_id', value type: Union[str, LeaseClient]
timeout for subrequest:
key: 'timeout', value type: int
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
- raise_on_any_failure
- bool
This is a boolean param which defaults to True. When this is set, an exception is raised even if there is a single operation failure.
Returns
An iterator of responses, one for each blob in order.
Return type
Iterator[HttpResponse]
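Examples
Setting a premium page blob tier on several blobs (a minimal sketch, assuming a premium storage account, a container_client instance created as in the earlier examples, and page blobs named "page_blob1" and "page_blob2"):
from azure.storage.blob import PremiumPageBlobTier

# Apply the P10 tier to both page blobs in a single batch request
container_client.set_premium_page_blob_tier_blobs(
    PremiumPageBlobTier.P10, "page_blob1", "page_blob2")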
set_standard_blob_tier_blobs
This operation sets the tier on block blobs.
A block blob's tier determines Hot/Cool/Archive storage type. This operation does not update the blob's ETag.
The maximum number of blobs that can be updated in a single request is 256.
set_standard_blob_tier_blobs(standard_blob_tier: str | StandardBlobTier | None, *blobs: str | Dict[str, Any] | BlobProperties, **kwargs: Any) -> Iterator[HttpResponse]
Parameters
- standard_blob_tier
- str or StandardBlobTier
Indicates the tier to be set on all blobs. Options include 'Hot', 'Cool', 'Archive'. The hot tier is optimized for storing data that is accessed frequently. The cool storage tier is optimized for storing data that is infrequently accessed and stored for at least a month. The archive tier is optimized for storing data that is rarely accessed and stored for at least six months with flexible latency requirements.
Note
If you want to set a different tier on different blobs, set this positional parameter to None. The blob tier on every BlobProperties will then be used.
- blobs
- str or dict(str, <xref:Any>) or BlobProperties
The blobs with which to interact. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.
Note
When the blob type is dict, the following keys and value rules apply.
blob name:
key: 'name', value type: str
standard blob tier:
key: 'blob_tier', value type: StandardBlobTier
rehydrate priority:
key: 'rehydrate_priority', value type: RehydratePriority
lease:
key: 'lease_id', value type: Union[str, LeaseClient]
snapshot:
key: "snapshot", value type: str
version id:
key: "version_id", value type: str
tags match condition:
key: 'if_tags_match_condition', value type: str
timeout for subrequest:
key: 'timeout', value type: int
- rehydrate_priority
- RehydratePriority
Indicates the priority with which to rehydrate an archived blob.
- if_tags_match_condition
- str
Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".
New in version 12.4.0.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
- raise_on_any_failure
- bool
This is a boolean param which defaults to True. When this is set, an exception is raised even if there is a single operation failure.
Returns
An iterator of responses, one for each blob in order.
Return type
Iterator[HttpResponse]
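Examples
Setting a standard blob tier on several block blobs (a minimal sketch, assuming a container_client instance created as in the earlier examples and block blobs named "my_blob1" and "my_blob2"):
# The tier may be passed as a string ('Hot', 'Cool', 'Archive') or a StandardBlobTier value
container_client.set_standard_blob_tier_blobs("Cool", "my_blob1", "my_blob2")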
upload_blob
Creates a new blob from a data source with automatic chunking.
upload_blob(name: str | BlobProperties, data: bytes | str | Iterable | IO, blob_type: str | BlobType = BlobType.BLOCKBLOB, length: int | None = None, metadata: Dict[str, str] | None = None, **kwargs) -> BlobClient
Parameters
- name
- str or BlobProperties
The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.
- data
The blob data to upload.
- blob_type
- BlobType
The type of the blob. This can be either BlockBlob, PageBlob or AppendBlob. The default value is BlockBlob.
- length
- int
Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.
- overwrite
- bool
Whether the blob to be uploaded should overwrite the current data. If True, upload_blob will overwrite the existing data. If set to False, the operation will fail with ResourceExistsError. The exception to the above is with Append blob types: if set to False and the data already exists, an error will not be raised and the data will be appended to the existing blob. If set overwrite=True, then the existing append blob will be deleted, and a new one created. Defaults to False.
- content_settings
- ContentSettings
ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.
- validate_content
- bool
If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used, because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.
- lease
- BlobLeaseClient or str
Required if the container has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- etag
- str
An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.
- match_condition
- MatchConditions
The match condition to use upon the etag.
- if_tags_match_condition
- str
Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".
New in version 12.4.0.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately. This method may make multiple calls to the service and the timeout will apply to each call individually.
- premium_page_blob_tier
- PremiumPageBlobTier
A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.
- standard_blob_tier
- StandardBlobTier
A standard blob tier value to set the blob to. For this version of the library, this is only applicable to block blobs on standard storage accounts.
- maxsize_condition
- int
Optional conditional header. The max length in bytes permitted for the append blob. If the Append Block operation would cause the blob to exceed that limit or if the blob size is already greater than the value specified in this header, the request will fail with MaxBlobSizeConditionNotMet error (HTTP status code 412 - Precondition Failed).
- max_concurrency
- int
Maximum number of parallel connections to use when the blob size exceeds 64MB.
- cpk
- CustomerProvidedEncryptionKey
Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.
- encryption_scope
- str
A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.
New in version 12.2.0.
- encoding
- str
Defaults to UTF-8.
- progress_hook
- Callable[[int, Optional[int]], None]
A callback to track the progress of a long running upload. The signature is function(current: int, total: Optional[int]) where current is the number of bytes transferred so far, and total is the size of the blob or None if the size is unknown.
Returns
A BlobClient to interact with the newly uploaded blob.
Return type
BlobClient
Examples
Upload blob to the container.
with open(SOURCE_FILE, "rb") as data:
    blob_client = container_client.upload_blob(name="myblob", data=data)
properties = blob_client.get_blob_properties()
walk_blobs
Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service. This operation will list blobs in accordance with a hierarchy, as delimited by the specified delimiter character.
walk_blobs(name_starts_with: str | None = None, include: str | List[str] | None = None, delimiter: str = '/', **kwargs: Any | None) -> ItemPaged[BlobProperties]
Parameters
- name_starts_with
- str
Filters the results to return only blobs whose names begin with the specified prefix.
- include
- list[str] or str
Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy', 'legalhold'.
- delimiter
- str
When the request includes this parameter, the operation returns a BlobPrefix element in the response body that acts as a placeholder for all blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string.
- timeout
- int
Sets the server-side timeout for the operation in seconds. For more details see https://learn.microsoft.com/rest/api/storageservices/setting-timeouts-for-blob-service-operations. This value is not tracked or validated on the client. Client-side network timeouts can be configured separately.
Returns
An iterable (auto-paging) response of BlobProperties.
Return type
ItemPaged[BlobProperties]
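Examples
Walking the blob hierarchy (a minimal sketch, assuming a container_client instance created as in the earlier examples and blob names that use '/' as a virtual folder delimiter):
# Items are BlobProperties or BlobPrefix entries; both expose a name attribute
for item in container_client.walk_blobs(delimiter="/"):
    print(item.name)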