Hello @jase jackson USA,
There are scenarios where Spark will be more efficient than standard SQL. For example, calculating a moving average over a large dataset is typically faster in Spark (whether via Spark SQL or PySpark) than in a traditional SQL engine, because the window computation is distributed across the cluster. So the performance benefit of Databricks comes from this efficient use of Spark.
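As a rough illustration, here is a minimal PySpark sketch of a moving average using a window function. The column names (`date`, `sales`) and the sample rows are hypothetical; on Databricks the `spark` session is already provided, so the builder line is only needed when running elsewhere.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("moving-average").getOrCreate()

# Hypothetical sample data; in practice this would be a large table.
df = spark.createDataFrame(
    [("2023-01-01", 100.0), ("2023-01-02", 120.0),
     ("2023-01-03", 90.0), ("2023-01-04", 110.0)],
    ["date", "sales"],
)

# 3-row trailing window ordered by date. Spark distributes this
# computation across executors rather than running it on a single node.
w = Window.orderBy("date").rowsBetween(-2, 0)
df.withColumn("moving_avg", F.avg("sales").over(w)).show()
```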
A deeper understanding of the queries and their plans, the data volumes, and the opportunities for parallel execution are some of the factors that come into play here, so it's difficult to say right off the bat whether performance will improve. Secondly, there may be some redevelopment effort involved as well, given the syntax differences between standard SQL and Spark SQL.
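One way to assess this before migrating is to run a representative query through Spark and inspect the plan the optimizer produces. A small sketch, assuming a table named `sales_data` already registered in your metastore (a hypothetical name, substitute your own):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# EXPLAIN the query to see how Spark will parallelize it
# (mode="formatted" requires Spark 3.0+).
spark.sql("""
    SELECT date,
           AVG(sales) OVER (ORDER BY date
                            ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg
    FROM sales_data
""").explain(mode="formatted")
```

The formatted plan shows the exchange (shuffle) and window stages, which gives a sense of how the query will scale with data volume.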
Hope this helps. Do let us know if you have any further queries.
------------
Please don’t forget to Accept Answer and Up-Vote wherever the information provided helps you, as this can be beneficial to other community members.