Parquet files are much more rigid about schema than CSV files. Make sure the schema created (or inferred) during the Copy Data step for the Parquet output file matches exactly the column names and data types you expect. Any mismatch or omitted column can cause the data to be written incorrectly.
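For illustration, here is a minimal sketch of an ADF Parquet dataset definition with an explicit schema. The dataset name, linked service, container, file name, and column names are all placeholders; substitute your own:

```json
{
    "name": "OutputParquetDataset",
    "properties": {
        "type": "Parquet",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLS",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "output",
                "fileName": "data.parquet"
            },
            "compressionCodec": "snappy"
        },
        "schema": [
            { "name": "Id", "type": "Int64" },
            { "name": "CustomerName", "type": "String" },
            { "name": "OrderDate", "type": "DateTime" }
        ]
    }
}
```

Declaring the schema explicitly, rather than relying on inference, makes type mismatches surface as validation errors instead of silently corrupted output.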
In the Copy Data activity, the source-to-sink column mapping has to be set up correctly, paying particular attention to the case where the source has been extended with an additional column. This ties in with the schema point above. For Parquet sinks, make sure the mapping explicitly includes all 16 columns and that the data types are compatible, as in the sketch below.
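As a sketch, an explicit mapping lives in the Copy activity's `translator` property. The column names here are hypothetical, and only three of the 16 columns are shown for brevity; every column must have an entry:

```json
"translator": {
    "type": "TabularTranslator",
    "mappings": [
        { "source": { "name": "Id" },           "sink": { "name": "Id" } },
        { "source": { "name": "CustomerName" }, "sink": { "name": "CustomerName" } },
        { "source": { "name": "NewColumn" },    "sink": { "name": "NewColumn" } }
    ],
    "typeConversion": true
}
```

With an explicit `TabularTranslator`, a newly added source column that is missing from `mappings` simply will not reach the sink, which makes a mapping gap easy to spot and fix.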
If available, use the data preview feature of Azure Data Factory to inspect the output of the source dataset, as well as the transformation/query output, before it is written to the sink. Note that this feature is only available on ADF source datasets. Previewing can help you catch the problem before the data is sent to the sink. For further troubleshooting, see: https://learn.microsoft.com/en-us/azure/data-factory/connector-troubleshoot-parquet