@Rohit Kulkarni welcome to Microsoft Q&A.
The error message you're encountering, `HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopExecutionException: Not enough columns in this line`, typically indicates a mismatch between the structure of your source file and the structure expected by your SQL Data Warehouse table: at least one row in the file parses to fewer columns than the target table defines. Here are a few things you could check:
- Data Types: Ensure that the data types in your source file match the data types in your SQL Data Warehouse. For example, if a column in your source file contains text values but the corresponding column in your SQL Data Warehouse is defined as an integer, the load can fail.
- Delimiter: Check the delimiter used in your source file. If the delimiter in the file doesn't match the one specified in your COPY command, every row can appear to have the wrong number of columns (see the sketch after this list for where the delimiter is set).
- Null or Missing Values: Check for rows with missing fields, such as a truncated last line or a row with fewer delimiters than expected, which would make the column count lower than the table requires.
- File Format: If you're using a specific file format like Parquet, make sure the file format is correctly specified in your COPY command, as shown below.
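
To make the delimiter and file-format points concrete, here is a minimal COPY sketch. The table name `dbo.MyTable`, the storage URL, and the terminator values are placeholders to replace with your own, and authentication options (such as CREDENTIAL) are omitted for brevity:

```sql
-- CSV load: FIELDTERMINATOR and ROWTERMINATOR must match the file exactly,
-- or rows can fail with "Not enough columns in this line".
COPY INTO dbo.MyTable
FROM 'https://mystorageaccount.blob.core.windows.net/mycontainer/data/myfile.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',   -- change to '|', '\t', etc. to match the file
    ROWTERMINATOR = '0x0A',  -- LF; use '0x0D0A' if the file has CRLF line endings
    FIRSTROW = 2             -- skip a header row if the file has one
);

-- Parquet load: the schema comes from the file itself, so FILE_TYPE must be
-- set and the target table's columns must line up with the Parquet schema.
COPY INTO dbo.MyTable
FROM 'https://mystorageaccount.blob.core.windows.net/mycontainer/data/myfile.parquet'
WITH (
    FILE_TYPE = 'PARQUET'
);
```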
If you've checked all of these and are still encountering the error, it can help to inspect a few rows of your source data for anything unexpected; the sketch below shows one way to have COPY capture the problem rows for you.
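
If eyeballing the file directly is impractical, one option is to let COPY skip a limited number of bad rows and write them to a reject folder for inspection. This is a sketch assuming a CSV load; `MAXERRORS` and `ERRORFILE` are documented COPY options, and the folder name is a placeholder:

```sql
-- Tolerate up to 10 bad rows and write the rejects to a folder in the same
-- storage container, so the malformed lines can be inspected directly.
COPY INTO dbo.MyTable
FROM 'https://mystorageaccount.blob.core.windows.net/mycontainer/data/myfile.csv'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    MAXERRORS = 10,             -- allow some rejected rows before failing the load
    ERRORFILE = '/rejectedrows' -- rejected rows are written here for review
);
```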
Hope this helps. Do let us know if you have any further queries.