Hello @Ashish Sinha ,
Welcome to the Microsoft Q&A platform.
Make sure the DataFrame column data types match the column data types in the target table.
Schema enforcement, also known as schema validation, is a safeguard in Delta Lake that ensures data quality by rejecting writes that do not match the target table's schema. Delta Lake uses the following rules to determine whether a write from a DataFrame to a table is compatible:
- All DataFrame columns must exist in the target table. If the DataFrame contains a column that is not present in the target table, an exception is raised. Columns present in the target table but not in the DataFrame are set to null.
- DataFrame column data types must match the column data types in the target table. If they don’t match, an exception is raised.
- DataFrame column names cannot differ only by case. This means you cannot have columns such as ‘Foo’ and ‘foo’ defined in the same table. While Spark can run in case-sensitive or case-insensitive (default) mode, Delta Lake is case-preserving but case-insensitive when storing the schema.
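The rules above can be sketched in plain Python. This is only an illustration of the compatibility checks, not Delta Lake's actual implementation; the function name `is_write_compatible` and the dict-based schema representation are hypothetical:

```python
def is_write_compatible(df_schema: dict, table_schema: dict):
    """Check a DataFrame schema (column name -> type) against a target
    table schema, mirroring Delta Lake's three validation rules.
    Returns (ok, reason). Hypothetical helper, for illustration only."""
    # Rule 3: DataFrame column names may not differ only by case.
    lowered = [c.lower() for c in df_schema]
    if len(lowered) != len(set(lowered)):
        return False, "DataFrame has columns that differ only by case"

    # Delta Lake compares names case-insensitively but preserves case.
    table_lower = {c.lower(): t for c, t in table_schema.items()}

    for col, dtype in df_schema.items():
        # Rule 1: every DataFrame column must exist in the target table.
        if col.lower() not in table_lower:
            return False, f"column '{col}' not found in target table"
        # Rule 2: column data types must match.
        if table_lower[col.lower()] != dtype:
            return False, f"type mismatch for column '{col}'"

    # Columns in the table but missing from the DataFrame are allowed;
    # Delta Lake fills them with null on write.
    return True, "compatible"
```

For example, writing `{"id": "int"}` into a table with schema `{"id": "int", "name": "string"}` is compatible (the missing `name` column is set to null), while writing `{"id": "long"}` into the same table fails the type-match rule.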
For more details, refer to Table batch reads and writes – Schema validation.
Hope this helps. Do let us know if you have any further queries.
------------
- Please accept an answer if correct. Original posters help the community find answers faster by identifying the correct answer. Here is how.
- Want a reminder to come back and check responses? Here is how to subscribe to a notification.