Important
This documentation has been retired and might not be updated. The products, services, or technologies mentioned in this content are no longer supported.
In this archive, you can find earlier versions of documentation for Azure Databricks products, features, APIs, and workflows.
Administration
Compute
- Create cluster UI (legacy)
- Cluster UI preview
- Install a library with an init script (legacy)
- Cluster-named init scripts (legacy)
- Global init scripts (legacy)
Dev tools
- Drop Delta table features (legacy)
- Legacy UniForm IcebergCompatV1
- Read Databricks tables from Apache Iceberg clients (legacy)
- Transactional writes to cloud storage with DBIO
- Hive table (legacy)
- Skew join optimization using skew hints
- Koalas
- Manage libraries with `%conda` commands (legacy)
- Workspace libraries (legacy)
- Explore and create tables in DBFS
- FileStore
- Browse files in DBFS
- Legacy Databricks CLI
- What is dbx by Databricks Labs?
- dbutils.library
- Migrate to Spark 3.x
- VSCode with Git folders
- VSCode workspace directory
- Pulumi Databricks resource provider
Governance
- External metastores (legacy)
- Create Unity Catalog managed storage using a service principal (legacy)
- Credential passthrough (legacy)
Machine learning and AI
- Optimized LLM serving
- Migrate optimized LLM serving endpoints to provisioned throughput
- Model serving (legacy)
- Share feature tables across workspaces (legacy)
- MLeap ML model export
- Distributed training with TensorFlow 2
- Horovod
- Model inference using Hugging Face Transformers for NLP
- Train a PySpark model and save in MLeap format
- Set up and considerations for `ai_generate_text()`
- Analyze customer reviews with `ai_generate_text()` and OpenAI
- Apache Spark MLlib and automated MLflow tracking
- Load data using Petastorm
MLflow
Notebooks
Release notes
Repos and Git source control
Resources
Security
Storage
- Connecting Azure Databricks and Azure Synapse with PolyBase (legacy)
- Azure Blob storage file source with Azure Queue Storage (legacy)
- Azure Cosmos DB
- Structured Streaming writes to Azure Synapse
- Neo4j
- Read and write XML data using the `spark-xml` library
- Connect to external systems
- Query databases using JDBC
- Query PostgreSQL with Azure Databricks
- Query MySQL with Azure Databricks
- Query MariaDB with Azure Databricks
- Query SQL Server with Azure Databricks
- Use the Databricks connector to connect to another Databricks workspace
- Amazon S3 Select
- MongoDB
- Cassandra
- Couchbase
- Elasticsearch
- Google BigQuery
- Read and write data from Snowflake
- Query data in Azure Synapse Analytics
- Connect to Azure Synapse Analytics dedicated pool
- Query Amazon Redshift using Azure Databricks
- SQL Databases using the Apache Spark connector
- Configure Delta storage credentials
- Connect to Azure Blob Storage with WASB (legacy)