Hi Pednekar, Pratik,
Welcome to the Microsoft Q&A platform, and thank you for posting your query here.
I understand that you are getting a "LivyHttpRequestFailure" error while running a PySpark notebook in your Synapse workspace.
This error typically occurs when there is a large volume of input data to process and the assigned Apache Spark pool does not have enough capacity to handle it.
Could you please let us know the maximum number of nodes configured for your Apache Spark pool?
Kindly try increasing the number of nodes on the Apache Spark pool and enabling autoscaling.
Try choosing the XL node size; if the issue still persists, try XXL.
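If it helps, here is a minimal Azure CLI sketch of those two changes (the pool, workspace, and resource group names below are placeholders for your own values); the same settings are also available in the Azure portal under the Spark pool's scale settings:

```
# Sketch only: substitute your own pool, workspace, and resource group names.
# Bumps the node size to XLarge and enables autoscaling between 3 and 20 nodes.
az synapse spark pool update \
  --name mysparkpool \
  --workspace-name myworkspace \
  --resource-group myresourcegroup \
  --node-size XLarge \
  --enable-auto-scale true \
  --min-node-count 3 \
  --max-node-count 20
```

After the pool is updated, re-run the notebook; the new Livy session should pick up the larger pool configuration.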
For more details, kindly check out the following documentation:
Node size of Apache Spark pool
AutoScale for Apache Spark pool
Automatically scale Azure Synapse Analytics Apache Spark pools
Hope this helps. Kindly accept the answer by clicking on the Accept answer button. Thank you.