Hello @Guilherme Gaspar Monteiro ,
Welcome to the Microsoft Q&A and thank you for posting your questions here.
I understand you want to reuse a single Databricks Cluster Pool across an entire Azure Data Factory pipeline.
Given your specific requirements and constraints, using a Databricks Workflow triggered by Azure Data Factory (ADF) is likely the most suitable solution. Although your entire orchestration is built in ADF, integrating a Databricks Workflow into your pipeline through an API call gives you the desired behavior without incurring cluster startup from the pool for each task.
My suggestion for best practice: organize your tasks as a Databricks Workflow and configure it to use your cluster pool within the Databricks workspace. Then use the Databricks Jobs REST API to trigger that workflow from your Azure Data Factory pipeline (for example, from a Web activity). You can pass parameters from ADF in the API call, provided the workflow is defined to accept them. Finally, implement monitoring in your ADF pipeline by polling the run status, so that ADF serves as the trigger and monitoring mechanism while all tasks inside the workflow share clusters drawn from the same pool, avoiding the startup time for each task. A rough sketch of the API calls is shown below.
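As a minimal sketch only, here is what the trigger-and-poll pattern could look like using the Databricks Jobs API 2.1 (`run-now` and `runs/get`). The workspace URL, token, job ID, and parameter names are placeholders you would replace with your own values; in ADF this would typically be issued from a Web activity rather than a script.

```python
# Sketch: trigger a Databricks Workflow (job) and poll until it finishes.
# Placeholder values below (host, token, job_id, parameter names) are assumptions.
import time
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace URL
TOKEN = "<databricks-token>"   # store in Azure Key Vault in practice, never hard-code
JOB_ID = 123                   # ID of the workflow configured to use your cluster pool

headers = {"Authorization": f"Bearer {TOKEN}"}

# 1. Trigger the workflow and pass parameters from ADF.
run = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers=headers,
    json={
        "job_id": JOB_ID,
        "notebook_params": {"run_date": "2024-01-01", "env": "prod"},  # example parameters
    },
).json()
run_id = run["run_id"]

# 2. Poll the run status so ADF knows when the workflow has completed.
while True:
    state = requests.get(
        f"{DATABRICKS_HOST}/api/2.1/jobs/runs/get",
        headers=headers,
        params={"run_id": run_id},
    ).json()["state"]
    if state.get("life_cycle_state") in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        print("Result:", state.get("result_state"))
        break
    time.sleep(30)
```

Because the whole workflow runs as a single Databricks job, its tasks acquire clusters from the same pool, and ADF only needs the two calls above to start it and track its outcome.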
I hope this is helpful! Do not hesitate to let me know if you have any other questions.
Please remember to "Accept Answer" if the answer helped, so that others in the community facing similar issues can easily find the solution.
Best Regards,
Sina Salam