Hi @Vaibhav
Thank you for reaching out with your query about loading CSV files from an SFTP location into a data lake using Azure Data Factory (ADF). I understand that you are facing issues with the delimiter in the source data, and that your workaround is to read each file without any delimiter, load it into the data lake, use a data flow to replace the delimiter with a hyphen, split the values into multiple columns, and sink the data in CSV format. I'll be happy to help with your questions about this approach.
Regarding your first question, there is no direct way to handle this in ADF without additional services. However, you can try setting the "Column delimiter" option on the delimited-text dataset to a character that does not appear in the source data; each row is then read intact instead of being split at the wrong places, so the data lands in the data lake correctly. You can also set the "Escape character" option so that escaped occurrences of the delimiter inside values are treated as data rather than as column breaks.
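To illustrate the idea outside of ADF, here is a minimal Python sketch using the standard csv module (the sample rows, the "|" delimiter, and the backslash escape are hypothetical choices for illustration). It shows both techniques: parsing with a delimiter that never appears in the data, and using an escape character to protect the real delimiter inside values:

```python
import csv
import io

# Hypothetical row where values themselves contain commas.
raw = 'Smith, John|42|New York, NY\n'

# Technique 1: parse with a delimiter ("|") that does not occur in the data,
# so the embedded commas stay inside their values -- the same idea as setting
# an unused "Column delimiter" on the ADF dataset.
rows = list(csv.reader(io.StringIO(raw), delimiter='|'))
print(rows[0])  # ['Smith, John', '42', 'New York, NY']

# Technique 2: keep the comma delimiter, but escape the commas that are
# part of the data -- the same idea as the "Escape character" option.
raw_escaped = 'Smith\\, John,42\n'  # "\," marks a literal comma in a value
rows_escaped = list(csv.reader(io.StringIO(raw_escaped),
                               delimiter=',', escapechar='\\'))
print(rows_escaped[0])  # ['Smith, John', '42']
```

Both techniques only work if you can rely on the chosen delimiter (or escape character) never occurring unescaped in the source data, which is worth validating on a sample of real files first.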
For your second question, you can use dynamic column mapping in the data flow to avoid hard-coding column names. A "Derived Column" transformation can split the single input column into multiple columns dynamically, and data flow parameters make the flow more flexible and reusable across files with different layouts.
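As a rough Python sketch of that data-flow logic (the function name, the "|" source delimiter, and the generic col_0, col_1, ... naming are assumptions for illustration, not ADF syntax):

```python
def split_row(line: str, delimiter: str = '|') -> dict:
    """Mimic the described flow: normalize the delimiter to a hyphen,
    then split into generically named columns so no column names
    need to be hard-coded."""
    normalized = line.rstrip('\n').replace(delimiter, '-')
    parts = normalized.split('-')
    # Generic names (col_0, col_1, ...) stand in for dynamic column mapping.
    return {f'col_{i}': value for i, value in enumerate(parts)}

print(split_row('a|b|c'))  # {'col_0': 'a', 'col_1': 'b', 'col_2': 'c'}
```

Note one caveat with the hyphen-replacement step: if the source values can themselves contain hyphens, splitting on "-" will break them apart, so a replacement character that cannot occur in the data is safer.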
Regarding your third question, you can set the file name in the sink dynamically, deriving it from the source file name with a dynamic-content expression. You can also partition the output by a specific column using the sink's partitioning settings. Together these ensure the data is written to the correct file and partitioned correctly.
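For example, the dynamic file-naming logic can be sketched in Python (the helper name and the date-suffix convention are hypothetical; in ADF you would express the equivalent with dynamic-content functions such as concat(), replace(), and formatDateTime()):

```python
from datetime import date
from pathlib import Path

def sink_file_name(source_name: str, run_date: date) -> str:
    """Derive the sink file name from the source file name,
    appending the run date so each load is traceable."""
    stem = Path(source_name).stem  # file name without its extension
    return f"{stem}_{run_date.isoformat()}.csv"

print(sink_file_name('orders.csv', date(2023, 5, 1)))  # orders_2023-05-01.csv
```

The date suffix is just one naming convention; the key point is that the sink name is computed from the source file name rather than fixed, so one pipeline can handle many files.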
I hope this information helps you. If you have any further questions or concerns, please feel free to ask. We are always here to assist you.