@Jon Z - Thanks for the question and using MS Q&A platform.
UPDATE: You can use the Azure Synapse Dedicated SQL Pool Connector for Apache Spark to read dedicated SQL pool tables in a Spark notebook:
%%spark
val df = spark.read.synapsesql("sample.dbo.employees")
df.show()
Please check out the MS Q&A thread addressing a similar issue: https://learn.microsoft.com/en-us/answers/questions/521273/synapse-spark-notebook-reading-dedicated-pool-usin
It seems that you are trying to query a table in your dedicated pool, but the table is not being found in the schema. You have also tried querying with the database.schema.table name, which returns an error.
To query a table in your dedicated pool, use the following syntax:
SELECT * FROM <schema>.<table>
Replace <schema> and <table> with the actual schema and table names, for example SELECT * FROM dbo.employees. If you still get an error, verify that the schema and table names are correct and that you have the necessary permissions to access the table.
Regarding the error you received when querying with the database.schema.table name: the spark_catalog requires a single-part namespace, so try using only the schema and table names, without the database name.
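As a minimal sketch of the above (the database, schema, and table names here are placeholders, not taken from your environment):

```scala
%%spark
// Placeholder names: replace dbo.employees with your own schema and table.
// A three-part name fails, because spark_catalog expects a single-part namespace:
// val df = spark.sql("SELECT * FROM mydatabase.dbo.employees")  // raises an error

// Drop the database prefix and query with schema.table only:
val df = spark.sql("SELECT * FROM dbo.employees")
df.show()
```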
If you are still having trouble, please share the query you are running and the schema and table names you are trying to access so that I can assist you further.
For more details, refer to Create, develop, and maintain Synapse notebooks in Azure Synapse Analytics.
Hope this helps. If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further questions, do let us know.