Loading data into a dedicated SQL pool

Rohit Kulkarni 691 Reputation points
2024-04-04T10:41:37.8666667+00:00

Hello Team,

I am trying to load data from Azure Blob Storage into a Dedicated SQL Pool using a Copy activity:

[Screenshot of the Copy activity]

Source: [screenshot]

Target: [screenshot]

And getting error :

ErrorCode=PolybaseOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse. Operation: 'Polybase operation'.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopExecutionException: Not enough columns in this line.,Source=.Net SqlClient Data Provider,SqlErrorNumber=107090,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=107090,State=1,Message=HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopExecutionException: Not enough columns in this line.,},],'

I have verified that the source file and the target table have the same number of columns.

Please advise.

Thanks

RK

Azure Synapse Analytics

1 answer

  1. phemanth 10,480 Reputation points Microsoft Vendor
    2024-04-04T11:16:02.1866667+00:00

    @Rohit Kulkarni welcome to Microsoft Q&A.

    The error message you’re encountering, HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopExecutionException: Not enough columns in this line, typically indicates a mismatch between the data structure in your source file and the structure expected by the SQL Data Warehouse. Here are a few things you could check:

    1. Data Types: Ensure that the data types in your source file match the data types in your SQL Data Warehouse. For example, if a column in your source file contains integer values but is defined as a string in your SQL Data Warehouse, this could cause an error.
    2. Delimiter: Check the delimiter used in your source file. If the delimiter in your source file doesn’t match the one specified in your COPY command, this could lead to a column mismatch (see the sketch after this list).
    3. Null or Missing Values: Make sure there are no null or missing values in your source file that could be causing the column count to be less than expected.
    4. File Format: If you’re using a specific file format like Parquet, make sure the file format is correctly specified in your COPY command.
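
    As a quick way to rule out a delimiter or header mismatch, you can try loading the same file directly with the COPY statement in the dedicated SQL pool, spelling out the terminators explicitly. This is only a sketch: the table name, storage path, and terminator values below are placeholders you would replace with your own.

        COPY INTO dbo.TargetTable
        FROM 'https://<storageaccount>.blob.core.windows.net/<container>/<folder>/*.csv'
        WITH (
            FILE_TYPE = 'CSV',
            FIELDTERMINATOR = ',',    -- must match the delimiter actually used in the file
            ROWTERMINATOR = '0x0A',   -- use '0x0D0A' if the file has Windows line endings
            FIELDQUOTE = '"',         -- so delimiters inside quoted values are not counted as extra columns
            FIRSTROW = 2,             -- skip the header row if the file has one
            CREDENTIAL = (IDENTITY = 'Managed Identity')
        );

    If this statement loads the file cleanly, the problem is most likely in how the Copy activity's source dataset describes the delimiter, quote character, or header row rather than in the data itself.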

    If you’ve checked all of these and are still encountering the error, it might be helpful to look at a few rows of your source data to see if there’s anything unexpected that could be causing the issue.
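
    Another way to locate the offending rows is to point a PolyBase external table at the same folder and allow a small number of rejected rows, so the load completes and reports how many lines failed the column check. Again this is just a sketch with placeholder names, and it assumes an external data source and a delimited-text file format already exist in the pool:

        CREATE EXTERNAL TABLE dbo.Staging_External (
            -- same column list and types as the target table
            Id       INT,
            Name     NVARCHAR(100),
            LoadDate DATE
        )
        WITH (
            LOCATION     = '/input/',         -- folder within the blob container
            DATA_SOURCE  = MyBlobDataSource,  -- existing external data source (placeholder name)
            FILE_FORMAT  = MyCsvFileFormat,   -- existing delimited-text file format (placeholder name)
            REJECT_TYPE  = VALUE,
            REJECT_VALUE = 10                 -- tolerate up to 10 bad rows instead of failing the whole load
        );

        SELECT TOP 10 * FROM dbo.Staging_External;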

    Hope this helps. Do let us know if you have any further queries.

