Hello,
Thank you for reaching out on the Microsoft Q&A portal and for providing such a detailed description of the issue. I understand that you're facing a critical data precision issue when copying high-precision numeric data from Oracle to Azure SQL. It's certainly a common challenge, but we can definitely work through it.
The error TypeConversionFailure and the subsequent data mismatch, even after using ROUND, point to an issue where the data type is being misinterpreted at some point during the data movement process. The Oracle NUMBER type can have a floating precision that doesn't always map perfectly to the fixed NUMERIC(38, 18) in Azure SQL, especially as it passes through the Azure Data Factory (ADF) integration runtime.
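To illustrate the mismatch, compare an unconstrained Oracle NUMBER column with the fixed-precision sink type (the table and column names here are hypothetical, for illustration only):

```sql
-- Oracle source: NUMBER with no declared precision/scale stores up to
-- 38 significant digits with a floating scale.
CREATE TABLE transactions (amount NUMBER);

-- Azure SQL sink: fixed precision and scale. Values are limited to
-- 20 integer digits and exactly 18 fractional digits.
CREATE TABLE transactions (amount NUMERIC(38, 18));
```

A value that fits the source comfortably can still exceed the sink's fixed scale or integer-digit budget, which is where the type inference in between matters.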
When ADF reads the data, its internal engine infers a very high-precision data type (as seen in your error message), which then fails to convert to the target's fixed precision and scale. Your ROUND function was a good idea, but the incorrect results suggest that a loss of precision is happening after the query is executed, likely due to an implicit cast to a less precise type like a float or double within the ADF pipeline before it reaches the sink.
To ensure a successful and accurate data copy, we need to be more explicit about the data type before it even leaves the Oracle source.
Explicit Casting in the Source Query
Instead of relying on ADF to correctly interpret the rounded number, you should explicitly cast the column to the exact precision and scale that matches your Azure SQL target table.
Modify your source query in the ADF Copy Activity to use the CAST function. This forces Oracle to format the data correctly before sending it to ADF.
- Update the Source Query in ADF
- Go to your ADF Copy Activity settings.
- In the "Source" tab, update your query to cast each numeric column.
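As a sketch, assuming hypothetical column names like transaction_amount and exchange_rate (replace these with your actual schema), the source query could look like this:

```sql
-- Hypothetical table and column names; substitute your own.
-- CAST forces Oracle to emit a fixed NUMBER(38, 18), so the ADF
-- runtime cannot fall back to a float/double intermediate type.
SELECT
    transaction_id,
    CAST(transaction_amount AS NUMBER(38, 18)) AS transaction_amount,
    CAST(exchange_rate AS NUMBER(38, 18)) AS exchange_rate
FROM my_schema.transactions;
```

Aliasing each cast column back to its original name keeps the Copy Activity mapping unchanged.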
By using SELECT CAST(... AS NUMBER(38, 18)), you are instructing the Oracle database itself to handle the rounding and truncation. The data is delivered to ADF in the precise NUMBER(38, 18) format, which maps cleanly to Azure SQL's NUMERIC(38, 18), eliminating any ambiguity or risk of incorrect type inference by the ADF runtime.
Here are a few other things to verify to ensure your pipeline is robust.
- In the Mapping tab of your Copy Activity, ensure that the data types are correctly inferred. After changing the source query with the CAST function, you can click "Import schemas" again. The source type should now be recognized with the correct precision, and it should map directly to the Decimal type with a precision of 38 and a scale of 18 on the sink side.
- If you are using a staging account or PolyBase for this copy operation, be aware that these intermediate steps have their own data type conversion rules that can sometimes cause issues. The explicit CAST in the source query is the best way to mitigate this, as it prepares the data correctly before it ever hits the staging environment. For a quick test, you could try running the copy without staging to see if the behavior changes, which would confirm whether the issue lies with the staging process.
I am confident that implementing the CAST function in your source query will resolve both the pipeline failure and the data mismatch issues you are observing.
Thank you again for asking your question. Please let us know if this solution works for you or if you have any further questions!
Helpful References:
- Azure Data Factory - Oracle Connector Data Type Mapping
- Azure Data Factory - Azure SQL Database Connector Data Type Mapping
Please "Accept as Answer" or click "Thumbs Up" if the answer provided is useful, so that it can help others in the community facing similar issues.
Thanks
Pratyush