As the error message indicates, you are reading data in Parquet format and writing it to a (Delta, I assume) table when you get the `Parquet column cannot be converted` error.
The vectorized Parquet reader is decoding the decimal type column to a binary format. So if you have decimal type columns in your source data, you should disable the vectorized Parquet reader.
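A minimal sketch of how you might disable it, assuming a live `SparkSession` named `spark` (e.g. in `spark-shell` or a notebook); the config key `spark.sql.parquet.enableVectorizedReader` is standard Spark SQL:

```scala
// Disable the vectorized Parquet reader for the current session.
// Spark falls back to the slower row-by-row Parquet reader,
// which handles the decimal columns without the conversion error.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")

// Alternatively, set it cluster-wide in spark-defaults.conf:
// spark.sql.parquet.enableVectorizedReader false
```

Note this trades performance for correctness: the vectorized reader is faster, so re-enable it once the decimal columns are no longer an issue.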
https://kb.databricks.com/scala/spark-job-fail-parquet-column-convert