Hello @tms345 ,
Welcome to the Microsoft Q&A platform.
Yes, you are correct: you can only track cost at the resource group level. Unfortunately, there is no built-in way to track Databricks cost at a job or user level while a cluster is running.
What is the difference between jobs compute and All-Purpose compute workloads?
The Jobs Compute workload is defined as a job that both starts and terminates the cluster on which it runs. For example, a workload may be triggered by the Databricks job scheduler, which launches a new Apache Spark cluster solely for the job and automatically terminates the cluster after the job is complete.
The All-Purpose Compute workload is any workload that is not an automated workload, for example, running a command within Databricks notebooks. These commands run on Apache Spark clusters which may persist until manually terminated. Multiple users can share a cluster to perform interactive analysis collaboratively.
For more details, refer to the Azure Databricks pricing FAQ.
To monitor cost and accurately attribute Azure Databricks usage to your organization’s business units and teams (for chargebacks, for example), you can tag workspaces (resource groups), clusters, and pools.
Note: Using the tags feature, you can filter your cost reports to see the charges for a specific cluster.
Reference: Monitor usage using cluster, pool, and workspace tags.
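As a rough illustration of how cluster tags enable chargebacks, the sketch below groups exported cost rows by a `ClusterName` tag. The row layout, tag column format, and the `ClusterName` tag key are assumptions for this example; your actual cost export schema and tag names may differ.

```python
import json
from collections import defaultdict

# Illustrative sample rows resembling an Azure cost export.
# The "tags" column format and the ClusterName tag key are
# assumptions for this sketch, not the exact export schema.
rows = [
    {"cost": 12.50, "tags": json.dumps({"ClusterName": "etl-job-cluster"})},
    {"cost": 3.75,  "tags": json.dumps({"ClusterName": "interactive-dev"})},
    {"cost": 8.20,  "tags": json.dumps({"ClusterName": "etl-job-cluster"})},
    {"cost": 1.10,  "tags": json.dumps({})},  # untagged usage
]

def cost_by_cluster(rows):
    """Sum cost per ClusterName tag; untagged usage is grouped separately."""
    totals = defaultdict(float)
    for row in rows:
        tags = json.loads(row["tags"] or "{}")
        totals[tags.get("ClusterName", "untagged")] += row["cost"]
    return dict(totals)

print(cost_by_cluster(rows))
```

The same grouping idea applies whether you pull the data from a cost export CSV or the Cost Management API: as long as clusters are tagged consistently, each team's spend can be attributed from the tag values.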
Hope this helps. Do let us know if you have any further queries.
------------
Please don’t forget to Accept Answer and Up-Vote wherever the information provided helps you; this can be beneficial to other community members.