I have tried to run a PySpark notebook on the Lakehouse and got the error below:
SparkCoreError/other: Livy session has failed. Error code: SparkCoreError/other. SessionInfo.State from SparkCore is Error: Failed to add required system configs for operationId 42c12ecc-0da0-4e05-a068-e29402888cc9. Error: Validation exception: {0}. Source: SparkCoreService.
When I click the Session-Id at the bottom left, I get the error below instead of the notebook logs.
Error: Can't get workspace id and capacity id for artifact with id 51e6da09-c6dc-4a25-b0ac-b8faa7897384....
My subscription is a Pro account. I have also tried this on my free trial account, where the notebook runs without any problem. I don't know what the difference is between a Pro account and a trial account.
Could you help me solve this error?
For me it is solved.
Create a new Spark environment; it can be created from the Fabric notebook itself. Run the notebook once with the new environment selected. After that, the notebook can be executed from any pipeline.
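Once the notebook has been run once with the new environment attached, a pipeline (or any client) can trigger it programmatically. A minimal sketch using the Fabric REST API's on-demand item job endpoint with jobType=RunNotebook; the workspace/notebook IDs and the bearer token here are placeholders you would supply yourself:

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def run_notebook_url(workspace_id: str, notebook_id: str) -> str:
    # On-demand run endpoint of the Fabric job scheduler
    # (jobType=RunNotebook); both IDs are placeholders.
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/items/{notebook_id}/jobs/instances?jobType=RunNotebook")

def run_notebook(workspace_id: str, notebook_id: str, token: str) -> int:
    # Queue an on-demand notebook run; the service responds
    # 202 Accepted when the job is queued successfully.
    req = urllib.request.Request(
        run_notebook_url(workspace_id, notebook_id),
        data=json.dumps({}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

This is only a sketch of the REST call; in practice the pipeline's built-in Notebook activity does the same thing for you, and the run will pick up whichever environment is attached to the notebook.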