Welcome to Microsoft Q&A and thank you for posting your question here.
Please try the steps below and let us know how it goes.
Increase the Session Timeout: You can extend the session timeout by setting the livy.server.session.timeout property in your Spark configuration. For example, you can set it to 4 hours:
spark.conf.set("livy.server.session.timeout", "4h")
This should give your long-running jobs more time to complete before the Livy session times out.
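As a quick check, here is a minimal sketch (assuming the session object is exposed as `spark`, the default in a Synapse notebook) to read the value back after setting it:

```python
# Minimal sketch, assuming an active Spark session exposed as `spark`
# (the default in Synapse notebooks): confirm the timeout value was applied.
print(spark.conf.get("livy.server.session.timeout"))
```

If the property does not take effect when set at runtime, it can also be added to the Spark pool's Apache Spark configuration so it applies to every session on that pool.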
Optimize Your Spark Job: Try to optimize your Spark job to reduce execution time (a small tuning sketch follows this list). This can include:
- Improving the efficiency of your Spark code.
- Tuning Spark configurations.
- Increasing the resources allocated to your Spark job.
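To illustrate the tuning point, here is a minimal sketch of common knobs; the DataFrame name `df` and the numeric values are placeholders to adapt to your own workload, not recommendations:

```python
# Minimal tuning sketch; `df` and the numbers below are placeholders,
# not recommendations for a specific workload.
spark.conf.set("spark.sql.adaptive.enabled", "true")    # let AQE coalesce shuffle partitions (Spark 3.x)
spark.conf.set("spark.sql.shuffle.partitions", "200")   # size shuffle partitions to your data volume
df = df.repartition(64).cache()                         # cache a DataFrame reused across several actions
df.count()                                              # materialize the cache once, up front
```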
Check Network Connectivity: Ensure there are no network connectivity issues between the Livy server and the Spark driver. This includes checking firewall and network settings.
Use a Different Spark Pool: If possible, try running your Spark job on a different Spark pool to see if that resolves the issue.
Review Spark Logs: Check the Spark logs for any errors or warnings that might be causing the issue. You can access these logs from the Azure Synapse Analytics workspace or the Azure portal.
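If you download the driver log from the monitoring page, a quick way to surface problems is to filter it for error and warning lines; a minimal sketch, assuming the file was saved locally as driver.log (an example filename, not a fixed path):

```python
# Minimal sketch: print ERROR/WARN lines from a downloaded driver log.
# "driver.log" is an assumed local filename, not a fixed Synapse path.
with open("driver.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if " ERROR " in line or " WARN " in line:
            print(line.rstrip())
```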
I hope the above steps resolve the issue. Please do let us know if the issue persists. Thank you.