Welcome to the Microsoft Q&A platform, and thanks for posting your question here.
As per my understanding, you are trying to load the output of a Parquet file from one container to another using Synapse Analytics. Please correct me if my understanding is wrong.
You can consider using a Notebook in Synapse Analytics, where you can write PySpark code to read the Parquet file into a DataFrame and then write that DataFrame to Blob Storage in the specified file format.
Note: Kindly make sure that your Azure AD user has read/write permissions on the storage account.
Kindly check the below code, which reads a Parquet file and writes it out as CSV in the storage account.
For more details, kindly check: Read & write parquet files using Apache Spark in Azure Synapse Analytics
Hope this will help. Please let us know if any further queries.