Hello Hari,
This is the expected behavior, as Spark uses the Hadoop (HDFS) filesystem library for writing files to ADLS Gen1.
The reason Spark sends calls with 644 permissions for new files, instead of honoring the default permissions of the parent folder, is that we apply the "spark.hadoop.fs.permissions.umask-mode" mask (default 022): new files are created with mode 666 minus the umask, so 666 & ~022 = 644. You can read more here.
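If you want new files to be created with wider permissions in the first place, one option is to relax the umask. Below is a minimal sketch, assuming driver-side writes; for executor-side writes the setting generally needs to go into the cluster's Spark config as "spark.hadoop.fs.permissions.umask-mode" rather than being set at runtime:
%scala
// A sketch, not from the original answer: relax the umask so new files
// are created as 666 & ~000 = 666 instead of 644.
// This affects file creation from the driver; for executors, set
// spark.hadoop.fs.permissions.umask-mode in the cluster's Spark config.
spark.sparkContext.hadoopConfiguration.set("fs.permissions.umask-mode", "000")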
Alternatively, you can reset the permissions on the ADLS folder/file after the fact, either through code or by using the Azure CLI.
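For ADLS Gen1, the Azure CLI exposes a set-permission command; something like the following should work (the account name and path below are placeholders):
az dls fs access set-permission --account youraccount --path /yourfolder/yourfile.ext --permission 777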
We tried the snippet below on Databricks to reset permissions on ADLS Gen1 & Gen2, and it worked.
%scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.fs.permission.FsPermission

// Sets the given octal permission (e.g. "777") on a single ADLS path.
def setPermissions(stringPath: String, stringPermissions: String): Unit = {
  val conf = spark.sessionState.newHadoopConf()
  val path = new Path(stringPath)
  val fs = path.getFileSystem(conf) // resolved from the URI scheme, e.g. adl://
  val perms = new FsPermission(stringPermissions)
  fs.setPermission(path, perms)
}

setPermissions("adl://yourfilepath.ext", "777")
Please let me know how it goes.
Thanks
Himanshu
Please consider clicking "Accept Answer" and "Up-vote" on the post that helped you, as it can be beneficial to other community members.