Hi @Harsh Khewal, greetings! Thank you for posting your question on this forum.
When you create an Azure Function using the blob_trigger type, the function processes all files that are already present in the container, and it also triggers automatically whenever new files are added to the linked container. You have to provide the container name in the path configuration of app.blob_trigger.
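If it helps, the path can also carry a blob name pattern so the function fires only for matching blobs. Here is a hypothetical variation (the .pdf filter and the function name are made up for illustration; it reuses the app object, container name, and connection setting from the full sample further below):

# Hypothetical sketch: trigger only on .pdf blobs in the container.
@app.blob_trigger(arg_name="myblob", path="testblobtrigger/{name}.pdf",
                  connection="lsayanastorage_STORAGE")
def pdf_blob_trigger(myblob: func.InputStream):
    logging.info(f"New PDF blob: {myblob.name}")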
The document Get started with Document Intelligence provides sample Python code that you can incorporate inside your blob_trigger function definition to perform document analysis on the files. That code sample uses a URL source to process the files, and you can get the URL through the Blob service client. Please refer to the sample below:
import azure.functions as func
from azure.storage.blob import BlobServiceClient
import logging
import os

app = func.FunctionApp()

# Create a BlobServiceClient from the storage account connection string
blob_service_client = BlobServiceClient.from_connection_string(os.environ['lsayanastorage_STORAGE'])

@app.blob_trigger(arg_name="myblob", path="testblobtrigger",
                  connection="lsayanastorage_STORAGE")
def lsayanablob_trigger(myblob: func.InputStream):
    # myblob.name arrives as "<container>/<blob>", so strip the container
    # prefix before asking the service client for the blob.
    blob_name = myblob.name.split("/", 1)[-1]
    blob_client = blob_service_client.get_blob_client(container="testblobtrigger", blob=blob_name)
    blob_url = blob_client.url
    # Include the code from the Document Intelligence SDK here and
    # feed blob_url to the URL source used by analyze_layout()
    # (see the sketch after this sample).
    logging.info(f"Python blob trigger function processed blob\n"
                 f"Name: {myblob.name}\n"
                 f"Blob URL: {blob_url}\n"
                 f"Blob Size: {myblob.length} bytes")
Hope this helps. Please let us know if you need any additional assistance or further clarification.
If the response helped, please do click Accept Answer and Yes for the answer provided. Doing so would help other community members with a similar issue identify the solution. I highly appreciate your contribution to the community.