Important
Lakebase Autoscaling is available in the following regions: eastus, eastus2, centralus, southcentralus, westus, westus2, canadacentral, brazilsouth, northeurope, uksouth, westeurope, australiaeast, centralindia, southeastasia.
Lakebase Autoscaling is the latest version of Lakebase, with autoscaling compute, scale-to-zero, branching, and instant restore. If you are a Lakebase Provisioned user, see Lakebase Provisioned.
This page provides an overview of the Lakebase Autoscaling API, including authentication, available endpoints, and common patterns for working with the REST API, Databricks CLI, and Databricks SDKs (Python, Java, Go).
For the complete API reference, see the Postgres API documentation.
Important
The Lakebase Postgres API is in Beta. API endpoints, parameters, and behaviors are subject to change.
Authentication
The Lakebase Autoscaling API uses workspace-level OAuth authentication for managing project infrastructure (creating projects, configuring settings, etc.).
Note
Two types of connectivity: This API is for platform management (creating projects, branches, and computes). For database access (connecting to a database to run queries), use one of the following:
- SQL clients (psql, pgAdmin, DBeaver): Use Lakebase OAuth tokens or Postgres passwords. See Authentication.
- Data API (RESTful HTTP): Use Lakebase OAuth tokens. See Data API.
- Programming language drivers (psycopg, SQLAlchemy, JDBC): Use Lakebase OAuth tokens or Postgres passwords. See Quickstart.
For a complete explanation of these two authentication layers, see Authentication architecture.
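As a concrete sketch of the database-access layer (as opposed to the platform-management API documented on this page), the following builds a libpq-style connection string in which a Lakebase OAuth token serves as the Postgres password. The host, database, and role names are hypothetical placeholders; substitute your instance's values:

```python
def build_dsn(host: str, dbname: str, role: str, token: str) -> str:
    """Build a libpq-style DSN that psql or a Postgres driver could consume.

    The OAuth token is passed as the password; sslmode=require because
    Lakebase connections are TLS-encrypted.
    """
    return (
        f"host={host} port=5432 dbname={dbname} "
        f"user={role} password={token} sslmode=require"
    )

dsn = build_dsn(
    host="my-project.database.cloud.databricks.com",  # hypothetical endpoint host
    dbname="databricks_postgres",                     # hypothetical database name
    role="my-role",
    token="<oauth-token>",
)
print(dsn)
```

The same string works with `psql "$DSN"` or as the `conninfo` argument of a Postgres driver.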
Set up authentication
Authenticate using the Databricks CLI:
databricks auth login --host https://your-workspace.cloud.databricks.com
Follow the browser prompts to log in. The CLI caches your OAuth token at ~/.databricks/token-cache.json.
Then choose your access method:
Python SDK
The SDK uses unified authentication and automatically handles OAuth tokens:
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
Java SDK
The SDK uses unified authentication and automatically handles OAuth tokens:
import com.databricks.sdk.WorkspaceClient;
WorkspaceClient w = new WorkspaceClient();
CLI
Commands automatically use the cached token:
databricks postgres list-projects
curl
Generate a token for direct API calls:
export DATABRICKS_TOKEN=$(databricks auth token | jq -r .access_token)
curl -X GET "https://your-workspace.cloud.databricks.com/api/2.0/postgres/projects" \
-H "Authorization: Bearer ${DATABRICKS_TOKEN}"
OAuth tokens expire after one hour. Regenerate as needed.
For more details, see Authorize user access to Databricks with OAuth.
Available endpoints (Beta)
All endpoints use the base path /api/2.0/postgres/.
Projects
| Operation | Method | Endpoint | Documentation |
|---|---|---|---|
| Create project | POST | /projects | Create a project |
| Update project | PATCH | /projects/{project_id} | General settings |
| Delete project | DELETE | /projects/{project_id} | Delete a project |
| Get project | GET | /projects/{project_id} | Get project details |
| List projects | GET | /projects | List projects |
Branches
| Operation | Method | Endpoint | Documentation |
|---|---|---|---|
| Create branch | POST | /projects/{project_id}/branches | Create a branch |
| Update branch | PATCH | /projects/{project_id}/branches/{branch_id} | Update branch settings |
| Delete branch | DELETE | /projects/{project_id}/branches/{branch_id} | Delete a branch |
| Get branch | GET | /projects/{project_id}/branches/{branch_id} | View branches |
| List branches | GET | /projects/{project_id}/branches | List branches |
Endpoints (Computes and Read Replicas)
| Operation | Method | Endpoint | Documentation |
|---|---|---|---|
| Create endpoint | POST | /projects/{project_id}/branches/{branch_id}/endpoints | Create a compute / Create a read replica |
| Update endpoint | PATCH | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | Edit a compute / Edit a read replica |
| Delete endpoint | DELETE | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | Delete a compute / Delete a read replica |
| Get endpoint | GET | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | View computes |
| List endpoints | GET | /projects/{project_id}/branches/{branch_id}/endpoints | View computes |
Roles
| Operation | Method | Endpoint | Documentation |
|---|---|---|---|
| List roles | GET | /projects/{project_id}/branches/{branch_id}/roles | View Postgres roles |
| Create role | POST | /projects/{project_id}/branches/{branch_id}/roles | Create an OAuth role / Create a password role |
| Get role | GET | /projects/{project_id}/branches/{branch_id}/roles/{role_id} | View Postgres roles |
| Update role | PATCH | /projects/{project_id}/branches/{branch_id}/roles/{role_id} | Update a role |
| Delete role | DELETE | /projects/{project_id}/branches/{branch_id}/roles/{role_id} | Delete a role |
Database Credentials
| Operation | Method | Endpoint | Documentation |
|---|---|---|---|
| Generate database credential | POST | /credentials | OAuth token authentication |
Operations
| Operation | Method | Endpoint | Documentation |
|---|---|---|---|
| Get operation | GET | /projects/{project_id}/operations/{operation_id} | See example below |
Permissions
Project ACL permissions use the standard Azure Databricks Permissions API, not the /api/2.0/postgres/ base path. Set the request_object_type to database-projects and request_object_id to the project ID.
| Operation | Method | Endpoint | Documentation |
|---|---|---|---|
| Get project permissions | GET | /api/2.0/permissions/database-projects/{project_id} | Permissions API reference |
| Update project permissions | PATCH | /api/2.0/permissions/database-projects/{project_id} | Permissions API reference |
| Replace project permissions | PUT | /api/2.0/permissions/database-projects/{project_id} | Permissions API reference |
The grantable permission levels for Lakebase projects are CAN_USE and CAN_MANAGE. CAN_CREATE is an inherited level and cannot be set via the API. See Permission levels.
For usage examples and CLI/SDK/Terraform equivalents, see Grant permissions programmatically.
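As an illustration, an Update-permissions call is a PATCH to the Permissions API path with an access control list in the body. The sketch below only constructs the endpoint and payload; the principal names are hypothetical placeholders:

```python
import json

# Hypothetical project ID; CAN_USE and CAN_MANAGE are the grantable levels.
project_id = "my-app"
endpoint = f"/api/2.0/permissions/database-projects/{project_id}"  # PATCH

payload = {
    "access_control_list": [
        {"user_name": "someone@example.com", "permission_level": "CAN_USE"},
        {"group_name": "data-platform-admins", "permission_level": "CAN_MANAGE"},
    ]
}
print("PATCH", endpoint)
print(json.dumps(payload, indent=2))
```

Send the payload with any HTTP client, passing the workspace OAuth token in the Authorization header as in the curl examples above.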
Get operation
Check the status of a long-running operation by its resource name.
Python SDK
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
# Start an operation (example: create project)
operation = w.postgres.create_project(...)
print(f"Operation started: {operation.name}")
# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")
Java SDK
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.postgres.*;
WorkspaceClient w = new WorkspaceClient();
// Start an operation (example: create project)
CreateProjectOperation operation = w.postgres().createProject(...);
System.out.println("Operation started: " + operation.getName());
// Wait for completion
Project result = operation.waitForCompletion();
System.out.println("Operation completed: " + result.getName());
CLI
The CLI automatically waits for operations to complete by default. Use --no-wait to skip polling:
# Create project without waiting
databricks postgres create-project --no-wait ...
# Later, check the operation status
databricks postgres get-operation projects/my-project/operations/abc123
curl
# Get operation status
curl -X GET "$WORKSPACE/api/2.0/postgres/projects/my-project/operations/abc123" \
-H "Authorization: Bearer ${DATABRICKS_TOKEN}" | jq
Response format:
{
"name": "projects/my-project/operations/abc123",
"done": true,
"response": {
"@type": "type.googleapis.com/databricks.postgres.v1.Project",
"name": "projects/my-project",
...
}
}
Fields:
- done: false while in progress, true when complete
- response: Contains the result when done is true
- error: Contains error details if the operation failed
Common patterns
Resource naming
Resources follow a hierarchical naming pattern where child resources are scoped to their parent.
Projects use this format:
projects/{project_id}
Child resources like operations are nested under their parent project:
projects/{project_id}/operations/{operation_id}
This means you need the parent project ID to access operations or other child resources.
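The hierarchical pattern above can be captured in two small helpers for building and parsing operation names:

```python
def operation_name(project_id: str, operation_id: str) -> str:
    """Build the fully qualified resource name for an operation."""
    return f"projects/{project_id}/operations/{operation_id}"

def parse_operation_name(name: str) -> tuple[str, str]:
    """Recover (project_id, operation_id) from a resource name."""
    _, project_id, _, operation_id = name.split("/")
    return project_id, operation_id
```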
Resource IDs:
When creating resources, you must provide a resource ID (like my-app) for the project_id, branch_id, or endpoint_id parameter. This ID becomes part of the resource path in API calls (such as projects/my-app/branches/development).
You can optionally provide a display_name to give your resource a more descriptive label. If you don't specify a display name, the system uses your resource ID as the display name.
:::tip Finding resources in the UI
To locate a project in the Lakebase UI, look for its display name in the projects list. If you didn't provide a custom display name when creating the project, search for your project_id (such as "my-app").
:::
Note
Resource IDs cannot be changed after creation.
Requirements:
- Must be 1-63 characters long
- Lowercase letters, digits, and hyphens only
- Cannot start or end with a hyphen
- Examples: my-app, analytics-db, customer-123
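The requirements above translate directly into a validation pattern; a sketch for checking an ID client-side before calling the API:

```python
import re

# 1-63 chars, lowercase letters/digits/hyphens, no leading or trailing hyphen.
# A single alphanumeric character is valid; the optional group covers the rest.
RESOURCE_ID_RE = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def is_valid_resource_id(resource_id: str) -> bool:
    return RESOURCE_ID_RE.fullmatch(resource_id) is not None
```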
Long-running operations (LROs)
Create, update, and delete operations return a databricks.longrunning.Operation object that provides a completion status.
Example operation response:
{
"name": "projects/my-project/operations/abc123",
"done": false
}
Poll for completion using GetOperation:
Python SDK
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
# Start an operation
operation = w.postgres.create_project(...)
# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")
Java SDK
import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.postgres.*;
WorkspaceClient w = new WorkspaceClient();
// Start an operation
CreateProjectOperation operation = w.postgres().createProject(...);
// Wait for completion
Project result = operation.waitForCompletion();
System.out.println("Operation completed: " + result.getName());
CLI
The CLI automatically waits for operations to complete by default. Use --no-wait to return immediately:
databricks postgres create-project --no-wait ...
curl
# Poll the operation
curl "$WORKSPACE/api/2.0/postgres/projects/my-project/operations/abc123" \
-H "Authorization: Bearer ${DATABRICKS_TOKEN}" | jq '.done'
Poll every few seconds until done is true.
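The polling loop can be written once and reused for any long-running operation. In this sketch, fetch is any callable that returns the operation JSON as a dict (for example, a wrapper around the GET call above); injecting it keeps the loop transport-agnostic:

```python
import time

def wait_for_operation(fetch, interval_s=2.0, timeout_s=600.0, sleep=time.sleep):
    """Poll fetch() until the operation reports done, then return its response.

    Raises RuntimeError if the operation reports an error, TimeoutError if it
    does not complete within timeout_s.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        op = fetch()
        if op.get("done"):
            if "error" in op:
                raise RuntimeError(f"Operation failed: {op['error']}")
            return op.get("response")
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Operation {op.get('name')} did not complete")
        sleep(interval_s)
```

The sleep parameter exists so tests can stub out the delay; in production code the default time.sleep is fine.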
Update masks
Update operations require an update_mask parameter specifying which fields to modify. This prevents accidentally overwriting unrelated fields.
Format differences:
| Method | Format | Example |
|---|---|---|
| REST API | Query parameter | ?update_mask=spec.display_name |
| Python SDK | FieldMask object | update_mask=FieldMask(field_mask=["spec.display_name"]) |
| CLI | Positional argument | update-project NAME spec.display_name |
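For the REST API, multiple field paths go into a single comma-separated update_mask query parameter (the usual FieldMask convention). A sketch of building such a URL; spec.settings here is a hypothetical second field for illustration:

```python
from urllib.parse import urlencode

# Comma-join the field paths, then percent-encode the whole query value.
fields = ["spec.display_name", "spec.settings"]  # spec.settings is hypothetical
query = urlencode({"update_mask": ",".join(fields)})
url = f"/api/2.0/postgres/projects/my-project?{query}"
print(url)
```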
Error handling
The Lakebase API returns standard HTTP status codes.
409: Conflicting operations
Error message:
project already has running conflicting operations, scheduling of new ones is prohibited
What it means:
Lakebase sometimes schedules internal maintenance operations on projects. If a client request arrives while one of these internal operations is in progress, Lakebase can reject the new request with a 409 Conflict error.
This is expected behavior. Clients should be prepared to retry requests when this error occurs.
What to do:
Retry the request. When the internal operation completes, Lakebase accepts new requests for the project.
Use exponential backoff for retries: wait a short interval before the first retry, then double the wait on each subsequent attempt. A starting interval of 100 milliseconds with a maximum of 30 seconds is a reasonable default.
Python SDK
import time
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import ResourceConflict
from databricks.sdk.service.postgres import Branch, BranchSpec
w = WorkspaceClient()
def retry_on_conflict(fn, max_attempts=5, base_delay=0.1):
"""Retry a Lakebase API call when a conflicting operation is in progress."""
for attempt in range(max_attempts):
try:
return fn()
except ResourceConflict:
if attempt == max_attempts - 1:
raise
wait = base_delay * (2 ** attempt)
print(f"Conflicting operation in progress. Retrying in {wait}s...")
time.sleep(wait)
# Example: create a branch with retry
branch = retry_on_conflict(
lambda: w.postgres.create_branch(
parent="projects/my-project",
branch=Branch(spec=BranchSpec(no_expiry=True)),
branch_id="my-branch",
).wait()
)
curl
# Retry with exponential backoff on 409 responses
retry_on_conflict() {
  local max_attempts=5
  local delay=0.1
  local attempt=0
  local response http_code body
  while [ "$attempt" -lt "$max_attempts" ]; do
    response=$(curl -s -w "\n%{http_code}" "$@")
    http_code=$(echo "$response" | tail -n1)
    body=$(echo "$response" | sed '$d')
    if [ "$http_code" -ne 409 ]; then
      echo "$body"
      return 0
    fi
    attempt=$((attempt + 1))
    if [ "$attempt" -eq "$max_attempts" ]; then
      echo "Max retries reached. Last response: $body" >&2
      return 1
    fi
    echo "Conflicting operation in progress. Retrying in ${delay}s..." >&2
    sleep "$delay"
    # Shell arithmetic is integer-only, so double the fractional delay with awk
    delay=$(awk "BEGIN {print $delay * 2}")
  done
}
# Example: create a branch with retry
retry_on_conflict \
-X POST "$WORKSPACE/api/2.0/postgres/projects/my-project/branches?branch_id=my-branch" \
-H "Authorization: Bearer ${DATABRICKS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"spec": {"no_expiry": true}}'
Note
A 409 Conflict on a Lakebase API request means the request was not accepted, not that it was applied. Always verify the resource state after a successful retry by calling the corresponding GET endpoint.