Hi Raja Asokan Charuvil,
Good question, and your observation is correct. At this time, the system.billing.usage table doesn't include per-table details for Lakehouse Monitoring workloads. The usage_metadata fields such as uc_table_name, uc_table_schema, and uc_table_catalog are not yet populated for the LAKEHOUSE_MONITORING billing origin.
In other words, it’s currently not possible to break down Lakehouse Monitoring costs by individual Unity Catalog tables. The cost data is aggregated at the feature or SKU level (for example, JOBS_SERVERLESS or LAKEHOUSE_MONITORING) rather than by table.
If you need some level of attribution, one option is to correlate job or task run IDs from your monitoring jobs (for example, via the system.lakeflow.job_run_timeline system table) and map them to your table configurations. That won't give you precise billing data per table, but it can help approximate usage.
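As a rough illustration of that approximation, the sketch below splits an aggregated monitoring cost across tables in proportion to each monitoring job's run time. The table names, run durations, and cost figure are all hypothetical placeholders, not real billing data; you would substitute values pulled from your own billing and job-run system tables.

```python
# Hypothetical sketch: approximate per-table cost attribution for
# Lakehouse Monitoring. system.billing.usage only reports the cost at
# the LAKEHOUSE_MONITORING level, so here we allocate that aggregate
# proportionally to each monitored table's job-run duration.

def allocate_monitoring_cost(total_cost, run_seconds_by_table):
    """Split an aggregated cost across tables, weighted by run time."""
    total_seconds = sum(run_seconds_by_table.values())
    if total_seconds == 0:
        return {table: 0.0 for table in run_seconds_by_table}
    return {
        table: total_cost * seconds / total_seconds
        for table, seconds in run_seconds_by_table.items()
    }

# Illustrative inputs: run durations you might collect from your
# monitoring jobs' run history (placeholder table names and numbers).
run_seconds = {
    "main.sales.orders": 1200,
    "main.sales.customers": 600,
    "main.ops.events": 200,
}

estimate = allocate_monitoring_cost(100.0, run_seconds)
for table, cost in sorted(estimate.items()):
    print(f"{table}: ~${cost:.2f}")
```

Again, this is only a proportional estimate; the actual cost per table may differ depending on data volume, metric configuration, and compute type.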
This is a known limitation, and Databricks is looking at extending metadata coverage for Lakehouse Monitoring in future updates.
Hope this helps. If this answers your query, please click Accept Answer and Yes for "Was this answer helpful". If you have any further questions, do let us know.