I think you are having this issue because Unity Catalog requires storage paths to be on external storage registered with Unity Catalog (for example Azure Data Lake Storage Gen2 or AWS S3) and does not support dbfs:/ paths for managed tables.
Tables in Unity Catalog must use an external storage location registered with Unity Catalog. You can list the external locations that are already registered with:
SHOW EXTERNAL LOCATIONS;
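If no suitable external location exists yet, you can register one from a storage credential. This is only a minimal sketch; my_location, my_credential, and the abfss URL are placeholders, not values from your setup:
-- Register an external location (requires an existing storage credential)
CREATE EXTERNAL LOCATION my_location
URL 'abfss://my-container@your-storage-account.dfs.core.windows.net/your-folder'
WITH (STORAGE CREDENTIAL my_credential);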
If you want to save the repartitioned data, write it to an external storage location that is registered with Unity Catalog:
# Read the source data from DBFS
df = spark.read.format("parquet").load("dbfs:/path-to-your-data")
# Repartitioning the data
df = df.repartition("column_to_partition")
# Write data to Unity Catalog-compliant storage path
df.write.format("delta").mode("overwrite").save("abfss://******@your-storage-account.dfs.core.windows.net/your-folder")
Once the data is saved to a Unity Catalog-compatible location, create the table:
CREATE TABLE catalog_name.schema_name.table_name
USING DELTA
LOCATION 'abfss://******@your-storage-account.dfs.core.windows.net/your-folder';
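You can sanity-check that the table points at the expected external path; DESCRIBE DETAIL works on Delta tables and reports the location (the table name below is a placeholder):
-- Verify the table's storage location and format
DESCRIBE DETAIL catalog_name.schema_name.table_name;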
After creating the table, you can apply Z-Ordering for performance optimization:
OPTIMIZE catalog_name.schema_name.table_name
ZORDER BY (column_name);
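If you want to confirm that the OPTIMIZE actually ran, the table history shows it as an operation (again, the table name is a placeholder):
-- The most recent entries should include an OPTIMIZE operation
DESCRIBE HISTORY catalog_name.schema_name.table_name;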
Don't forget that the Azure Data Lake Storage account and container must have the appropriate permissions for Unity Catalog and your Databricks workspace.
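On the Azure side, that typically means giving the Databricks access connector the Storage Blob Data Contributor role on the storage account. On the Unity Catalog side, the principal doing the write also needs privileges on the external location; a hedged example, assuming an external location named my_location and a user user@example.com (both placeholders):
-- Grant the file-level privileges needed to read from and write to the external location
GRANT READ FILES, WRITE FILES ON EXTERNAL LOCATION my_location TO `user@example.com`;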