Copy data from Teradata to Snowflake

Aditya Raj 286 Reputation points
2021-08-18T14:50:08.847+00:00

I am running a copy activity to copy data from Teradata to Snowflake and I am getting this error:

Operation on target Copy data1 failed: ErrorCode=UserErrorOdbcOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ERROR [22000] NULL result in a non-nullable column
File 'sample.txt', line 2, character 1
Row 1, column "ABC"["Date":1]
If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client.,Source=Microsoft.DataTransfer.Runtime.GenericOdbcConnectors,''Type=System.Data.Odbc.OdbcException,Message=ERROR [22000] NULL result in a non-nullable column
File 'sample.txt', line 2, character 1
Row 1, column "ABC"["Date":1]
If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client.,Source=Snowflake,'

I tried the ON_ERROR option, but it skips whole records. I am not sure why this error is thrown; I can see data present in the column for which it reports "NULL result in a non-nullable column".
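
The error message points at Snowflake's COPY INTO loading options. For context, here is a minimal sketch of how those ON_ERROR values behave (the database, stage, and file format below are placeholders, not the ones generated by my pipeline):

    COPY INTO MY_DB.MY_SCHEMA.TARGET_TABLE
    FROM @MY_STAGE/sample.txt
    FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 1)
    ON_ERROR = 'CONTINUE';   -- load the good rows and skip only the rows that fail
    -- ON_ERROR = 'SKIP_FILE' skips the entire file on the first error
    -- ON_ERROR = 'ABORT_STATEMENT' (the default) fails the whole load, as above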

Azure Data Factory

Accepted answer
  1. KranthiPakala-MSFT 46,442 Reputation points Microsoft Employee
    2021-08-19T01:28:55.697+00:00

    Hi @Aditya Raj

    Sorry for your experience, and thanks for using the Microsoft Q&A forum and posting your query.

    Since you have confirmed that the column values are not NULL at the source, I suspect that you might be using auto-mapping in your copy activity's mapping section. If that is the case, the likely reason is the auto column mapping behavior, which is case sensitive by default. For example, if the column name is "Date_Skey" in the source connector but "DATE_SKEY" in the sink connector, auto-mapping will not be able to map the two columns automatically and you will receive the above error from the Snowflake side.
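
    To illustrate the case-sensitivity behavior (a small sketch with made-up names, not your schema): Snowflake folds unquoted identifiers to upper case, while quoted identifiers keep their exact case, so "Date_Skey" and DATE_SKEY end up as two different column names as far as the mapping is concerned.

        -- Unquoted identifier: Snowflake stores the column name as DATE_SKEY
        CREATE TABLE DEMO_SCHEMA.T1 (Date_Skey NUMBER);

        -- Quoted identifier: Snowflake stores it as Date_Skey, case preserved
        CREATE TABLE DEMO_SCHEMA.T2 ("Date_Skey" NUMBER);

        SELECT "Date_Skey" FROM DEMO_SCHEMA.T1;  -- fails: invalid identifier
        SELECT DATE_SKEY FROM DEMO_SCHEMA.T1;    -- works: the name was folded to upper case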

    If this is the case, the workaround is to map the columns explicitly rather than relying on auto-mapping, which should resolve the issue.

    If you are already doing explicit mapping (not auto-mapping) and still experiencing this issue, please provide your failed pipeline run ID, failed activity run ID, and data factory region details so that we can work with the internal team to check the logs and find the root cause of the issue.

    Hope this info helps. Do let us know how it goes.


2 additional answers

  1. Tarun Sareen 1 Reputation point
    2021-11-17T13:25:04.653+00:00

    Hi,

    We are facing the same issue while copying data from SQL Server to Snowflake. The only difference in our case is that the issue arises only when we configure additional column fields: the moment we add an additional column, the rest of the column values come through as NULL.

    Was there a workaround for this (other than manually mapping each table separately)?

    Thanks


  2. Macolino, Nick (Health) 1 Reputation point
    2022-01-15T03:01:48.003+00:00

    Yeah, I came across the same issue yesterday whilst putting together a generic loading process from SQL Server --> Snowflake. Snowflake defaults to UPPER case for metadata (makes sense, right?).
    The ADF auto-mapping process expects to see the same column name on either side. In my case I don't define a mapping and let the results of the query or the structure of the blob file determine the columns.
    The solution is to create your Snowflake table with the same case structure as the SQL Server object. Just enclose the column name in quotes, i.e. "column name".
    Also, to avoid any data type issues, I create each Snowflake column as just "column name" VARCHAR, with no defined length, and let Snowflake default it to the max. You'll have to deal with all the weird logic between staging and your next-hop transformation object; see the sketch below.
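
    A rough sketch of that staging DDL, assuming a SQL Server source table with mixed-case column names (the table and column names here are just examples):

        -- Quoted identifiers preserve the SQL Server casing exactly, so the
        -- auto-mapping sees the same column names on both sides.
        -- VARCHAR with no length defaults to Snowflake's maximum (16 MB).
        CREATE TABLE STAGING."CustomerOrders" (
            "OrderId"      VARCHAR,
            "OrderDate"    VARCHAR,
            "CustomerName" VARCHAR
        );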

    This works if you are following a pattern of ADF --> Blob --> Snowflake staging: you can then transform the data into its required structure in the next hop.
    If Snowflake is your target object with no additional transformation, then this solution won't fit your pattern.

    Regards
    Nick Mac.
