
Microsoft Purview - Power BI Schema Not Visible After Ingestion From Fabric

Adam Drummond 0 Reputation points
2026-03-16T15:18:21.4033333+00:00

Hi

I am currently having an issue with Purview and would like some clarity. I have ingested several thousand assets from Fabric into Purview. While cataloguing these, I noticed that a small number of ingested Power BI datasets (semantic models) do not display their schema in Purview: when I open the Schema tab I get a message saying "No Schema Found". If I open the same asset in Fabric, however, I can see the whole schema, with table names, column names and descriptions. Could someone please explain how schemas are ingested from Fabric and displayed in Purview? Many of these schemas are built from Databricks tables and sit in a workspace where the majority of ingested datasets do show their schema in Purview.

I have come across the following pieces of info and would also like these confirmed if possible:

  1. Purview only ingests item-level metadata from Fabric (name, workspace, lineage, etc.); it does not ingest tabular metadata, i.e. schema information. Schemas are classified and ingested by an API that scans the source rather than Fabric, i.e. it would look to Databricks to define and ingest the schema.
  2. Import mode vs Direct Query vs Direct Lake can affect how and what is ingested.
  3. The API scanning the schema does not support native queries, multiple SQL endpoints or views. If the Fabric semantic model pulls data from Databricks via multiple SQL endpoints, views or native queries, then schema ingestion into Purview will fail.
  4. There is a list of approved sources that Purview can ingest schema information from. If even one table in the model comes from an unapproved source, the schema cannot be ingested.

If the above numbered points are true, what would be the best way to ingest and visualise these schemas? Any help or clarity here would be appreciated.
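To make point 1 concrete: if the schema really comes from the Power BI metadata (scanner) API rather than from the source system, then a dataset Purview shows as "No Schema Found" should also come back without table/column metadata from that API. Below is a minimal sketch of that check against an offline sample; the workspace, dataset and table names are made up, and the payload shape is only a simplified approximation of the scanner API's `scanResult` response.

```python
import json

# Hypothetical, trimmed-down excerpt of a Power BI admin scanner API result
# (POST /admin/workspaces/getInfo?datasetSchema=true, then GET scanResult/{id}).
# All names below are invented for illustration.
SCAN_RESULT = json.loads("""
{
  "workspaces": [{
    "name": "Finance",
    "datasets": [
      {"name": "SalesModel",
       "tables": [{"name": "FactSales",
                   "columns": [{"name": "Amount", "dataType": "Double"}]}]},
      {"name": "OpsModel", "tables": []}
    ]
  }]
}
""")

def datasets_missing_schema(scan_result):
    """Return names of datasets whose scanner payload carries no table/column
    metadata -- candidates for showing 'No Schema Found' in Purview."""
    missing = []
    for ws in scan_result["workspaces"]:
        for ds in ws["datasets"]:
            tables = ds.get("tables") or []
            if not any(t.get("columns") for t in tables):
                missing.append(ds["name"])
    return missing

print(datasets_missing_schema(SCAN_RESULT))  # ['OpsModel']
```

If the affected semantic models turn up in this list, the gap is upstream of Purview (the metadata API itself cannot read the model), not in the Purview scan configuration.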

Microsoft Security | Microsoft Purview

2 answers

  1. Yutaka_K_JP 1,655 Reputation points
    2026-03-20T05:05:21.21+00:00

    I think the schema disappears because Purview only surfaces it when the Power BI metadata API can statically read the semantic model as a single lineage path. The dependable fix, in my experience, is to keep all tables on one SQL endpoint and push any transformations into a simple, stable view.


  2. Smaran Thoomu 34,960 Reputation points Microsoft External Staff Moderator
    2026-03-17T06:52:24.16+00:00

    @Adam Drummond Hey Adam, it sounds like you’ve got Power BI semantic models in Fabric that show up in Purview with “No Schema Found.” Here’s what happens under the covers:

    • Purview’s Fabric/Power BI connector harvests item-level metadata (name, workspace, lineage) directly via the Power BI/Fabric APIs.

    • Tabular metadata (tables and columns) for semantic models is ingested via the Power BI REST metadata API, not by crawling the underlying Databricks source.

    • Only supported storage modes and connectors are scanned for schema, typically Import and DirectQuery models. Direct Lake models, and models using native queries, multiple SQL endpoints, M scripts or unsupported connectors, will result in no schema being pulled.

    • If even one table in the model uses an unsupported source, the scanner may drop the entire schema (there’s also a 1 MB payload/800-column limit to watch out for).
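The size limits in the last bullet are easy to rule out with a quick offline check. Here is a minimal sketch, assuming the limits are as stated above (roughly 1 MB of serialized schema metadata and 800 columns per model); the sample model and its field names are invented for illustration.

```python
import json

# Limits quoted in the answer above; treat these values as assumptions.
MAX_COLUMNS = 800
MAX_PAYLOAD_BYTES = 1_000_000  # ~1 MB

def schema_within_limits(model):
    """model: dict with a 'tables' list, each table holding a 'columns' list.
    Returns True when the model is under both quoted limits."""
    n_columns = sum(len(t.get("columns", [])) for t in model.get("tables", []))
    payload_bytes = len(json.dumps(model).encode("utf-8"))
    return n_columns <= MAX_COLUMNS and payload_bytes <= MAX_PAYLOAD_BYTES

# A made-up 900-column fact table trips the column cap.
model = {"tables": [{"name": "Fact",
                     "columns": [{"name": f"c{i}"} for i in range(900)]}]}
print(schema_within_limits(model))  # False
```

If a model passes this check, the missing schema is more likely a storage-mode or connector issue than a size issue.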

    What you can try:

    1. Check the Storage Mode on the failing datasets (Import vs DirectQuery vs Direct Lake).
    2. In your Purview scan settings, enable Advanced Resource Sets → Include schema, re-run the scan, then allow ~12 hours for the harvested metadata to appear.
    3. Review the scan job’s warnings/errors in Purview—look for skipped tables or connector-unsupported messages.
    4. Make sure your model doesn’t exceed the 1 MB payload or 800-column schema limit.
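As a complement to step 3, you can also inspect the asset itself through Purview's Atlas-style catalog API and see whether any column entities are attached. The sketch below works on a made-up entity payload loosely shaped like a `GET .../catalog/api/atlas/v2/entity/guid/{guid}` response; the type name and the `columns` relationship attribute are assumptions here, not confirmed Purview type definitions.

```python
# Hypothetical, simplified entity payload for a scanned Power BI dataset.
# The typeName and relationship attribute names are illustrative assumptions.
ENTITY = {
    "entity": {
        "typeName": "powerbi_dataset",
        "attributes": {"name": "OpsModel"},
        # An empty relationship list corresponds to 'No Schema Found' in the UI.
        "relationshipAttributes": {"columns": []},
    }
}

def has_schema(entity_payload):
    """True when the catalog entity has at least one column attached."""
    rels = entity_payload["entity"].get("relationshipAttributes", {})
    return bool(rels.get("columns"))

print(has_schema(ENTITY))  # False
```

Checking the entity directly distinguishes "the scan never harvested a schema" from "the schema exists but the UI is not rendering it".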

    If you need full schema ingestion, you’ll want your model in Import or DirectQuery mode against a supported source (e.g. Azure SQL/Synapse, ADLS Gen2). Otherwise the scanner won’t bring the schema into Purview.

    Follow-up questions to help narrow this down:

    1. Which storage mode are the non-visible models using?
    2. How many tables and columns do those models have (roughly)?
    3. Are there native SQL queries, M scripts or views in those models?
    4. Which connector types are your models using (e.g. Databricks Spark, SQL, ODBC)?

    Hope that gives you clarity—let me know those details and we can dig deeper!


    Note: This content was drafted with the help of an AI system. Please verify the information before relying on it for decision-making.

