Hi javier cohenar,
Thank you for using the Microsoft Q&A platform and for posting your question here.
As I understand it, you are getting an internal server error while using a mapping data flow in an Azure Data Factory pipeline. Please let me know if that is not the case.
Kindly try creating an integration runtime with a larger compute size and the memory-optimized compute type when executing the Data flow activity, and see if that helps.
The following specific scenarios can cause internal server errors:
Scenario 1: Not choosing the appropriate compute size/type and other factors
Successful execution of data flows depends on many factors, including the compute size/type, the number of sources/sinks to process, the partition specification, the transformations involved, the sizes of the datasets, data skew, and so on.
For more guidance, see Integration Runtime performance.
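As a sketch, an Azure integration runtime with a memory-optimized data flow cluster can be defined via its typeProperties. The name, core count, and time-to-live values below are illustrative; adjust them to your workload:

```json
{
  "name": "MemoryOptimizedIR",
  "properties": {
    "type": "Managed",
    "typeProperties": {
      "computeProperties": {
        "location": "AutoResolve",
        "dataFlowProperties": {
          "computeType": "MemoryOptimized",
          "coreCount": 16,
          "timeToLive": 10
        }
      }
    }
  }
}
```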
Scenario 2: Using debug sessions with parallel activities
When triggering a run using the data flow debug session with constructs like ForEach in the pipeline, multiple parallel runs can be submitted to the same cluster. This can lead to cluster failures during the run because of resource issues, such as running out of memory.
To submit a run with the appropriate integration runtime configuration defined in the pipeline activity after publishing the changes, select Trigger Now or Debug > Use Activity Runtime.
Scenario 3: Transient issues
Transient issues with microservices involved in the execution can cause the run to fail.
Configuring retries in the pipeline activity can resolve problems caused by transient issues. For more guidance, see Activity Policy.
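As a sketch, a retry policy on the Data flow activity in the pipeline JSON might look like the following (the activity and data flow names, retry count, and interval are illustrative):

```json
{
  "name": "Data flow1",
  "type": "ExecuteDataFlow",
  "policy": {
    "timeout": "0.12:00:00",
    "retry": 2,
    "retryIntervalInSeconds": 120
  },
  "typeProperties": {
    "dataFlow": {
      "referenceName": "dataflow1",
      "type": "DataFlowReference"
    }
  }
}
```

With this policy, a run that fails because of a transient microservice issue is retried automatically up to two times, waiting 120 seconds between attempts.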
For more details, kindly refer to the resource below:
Internal server errors | Troubleshoot mapping data flows in Azure Data Factory
Hope this helps. If it does, kindly accept the answer by clicking the Accept answer button. Thank you.