ADF Copy data step converts datetime in source parquet file to epoch timestamp

Luc Akkermans · 2023-03-13

Hi!

In ADF I am trying to sink a parquet file from Data Lake Storage Gen2 to an Azure SQL Database. When I download the parquet file from the data lake, it contains a column with a datetime stamp in the format '2023-03-09 00:00:00'; the file is written with this column by the .to_parquet Python function in a Databricks notebook. However, when I preview the parquet file in the Copy data step, this column is shown in epoch format, '1678320000000000' (and the same format shows up in the database after the sink).
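
The notebook writes the file roughly like this (a simplified pandas sketch; the column name and path are placeholders, not the real ones):

    import pandas as pd

    # Simplified sketch of the write in the Databricks notebook
    # (the column name "event_ts" and the path are placeholders).
    df = pd.DataFrame({"event_ts": pd.to_datetime(["2023-03-09 00:00:00"])})

    # pandas (via pyarrow) stores the column as a parquet TIMESTAMP,
    # physically an int64 epoch value; here microseconds, which matches
    # the '1678320000000000' shown in the Copy data preview.
    df.to_parquet("/dbfs/mnt/datalake/sample.parquet")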

Can you help me resolve this, so that the column keeps its datetime format?

Thanks in advance!

Tags:
Azure Databricks: An Apache Spark-based analytics platform optimized for Azure.
Azure Data Factory: An Azure service for ingesting, preparing, and transforming data at scale.

Answer accepted by question author
  HimanshuSinha (Microsoft Employee, Moderator) · 2023-03-13

    Hello @Anonymous ,

    Thanks for the question and using MS Q&A platform.
    Mapping Data Flows do provide some built-in functionality for this conversion, but since you are using the Copy activity, you can use a dynamic expression instead:

    @addSeconds('1970-01-01T00:00:00Z', div(columnWithEpochData, 1000000))
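
    The value you see in the preview is microseconds since the Unix epoch: dividing by 1,000,000 gives seconds, which addSeconds then adds to 1970-01-01T00:00:00Z. As a quick sanity check of the arithmetic outside ADF, here is a small Python sketch using your sample value:

        from datetime import datetime, timedelta, timezone

        # Epoch value from the Copy data preview (microseconds since 1970-01-01).
        epoch_us = 1678320000000000

        # Same arithmetic as the ADF expression: divide by 1,000,000 to get
        # seconds, then add them to 1970-01-01T00:00:00Z.
        ts = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=epoch_us // 1_000_000)
        print(ts)  # 2023-03-09 00:00:00+00:00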
    

    I ran a quick test, and I am confident this works.

    [Screenshot: the expression tested in an ADF pipeline]

    Thanks
    Himanshu
    Please accept the answer if it is useful, so that others in the community looking for remediation for similar issues can benefit.


0 additional answers
