Hi PRADEEPCHEEKATLA-MSFT, this is almost the same as the solution I came across in Data Science from Scratch, but when I run it I get the following error:
org.apache.spark.SparkException: Job aborted.
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<command-2802701877950826> in <module>
9 df=spark.createDataFrame([Row(**i) for i in data])
10 df.show()
---> 11 df.write.mode("overwrite").json("wasbs://<file_system>@<storage-account-name>.blob.core.windows.net/hr/emp")
/databricks/spark/python/pyspark/sql/readwriter.py in json(self, path, mode, compression, dateFormat, timestampFormat, lineSep, encoding)
815 compression=compression, dateFormat=dateFormat, timestampFormat=timestampFormat,
816 lineSep=lineSep, encoding=encoding)
--> 817 self._jwrite.json(path)
818
819 @since(1.4)
/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
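For reference, here is a minimal, self-contained sketch of what the failing cell is doing, reconstructed from the traceback. The sample rows, container, storage account name, and access key below are placeholders (the original `data` list is not shown), and the `spark.conf.set` account-key configuration is an assumption about how the wasbs path is being authenticated:

from pyspark.sql import Row

# `spark` is the SparkSession pre-created in a Databricks notebook.

# Assumed authentication: a storage account access key set on the session.
# Replace the placeholder account name and key with your own values.
spark.conf.set(
    "fs.azure.account.key.<storage-account-name>.blob.core.windows.net",
    "<storage-account-access-key>")

# Hypothetical sample data; the real `data` list is not shown in the traceback.
data = [
    {"emp_id": 1, "name": "Alice"},
    {"emp_id": 2, "name": "Bob"},
]

df = spark.createDataFrame([Row(**i) for i in data])
df.show()

# This is the write that raises SparkException: Job aborted in the traceback above.
df.write.mode("overwrite").json(
    "wasbs://<file_system>@<storage-account-name>.blob.core.windows.net/hr/emp")

If the account key (or a mount/credential for the container) is not configured on the cluster, the write to the wasbs path can fail with a "Job aborted" error even though `df.show()` succeeds, since only the write touches the storage account.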