Lakebase Autoscaling API guide

Important

Lakebase Autoscaling is in Beta in the following regions: eastus2, westeurope, westus.

Lakebase Autoscaling is the latest version of Lakebase with autoscaling compute, scale-to-zero, branching, and instant restore. For a feature comparison with Lakebase Provisioned, see choosing between versions.

This page provides an overview of the Lakebase Autoscaling API, including authentication, available endpoints, and common patterns for working with the REST API, Databricks CLI, Databricks SDKs (Python, Java, Go), and Terraform.

For the complete API reference, see the Postgres API documentation.

Important

The Lakebase Postgres API is in Beta. API endpoints, parameters, and behaviors are subject to change.

Authentication

The Lakebase Autoscaling API uses workspace-level OAuth authentication for managing project infrastructure (creating projects, configuring settings, etc.).

Note

Two types of connectivity: This API is for platform management (creating projects, branches, and computes). For database access (connecting to query your data), use one of the following:

  • SQL clients (psql, pgAdmin, DBeaver): Use Lakebase OAuth tokens or Postgres passwords. See Authentication.
  • Data API (RESTful HTTP): Use Lakebase OAuth tokens. See Data API.
  • Programming language drivers (psycopg, SQLAlchemy, JDBC): Use Lakebase OAuth tokens or Postgres passwords. See Quickstart.

For a complete explanation of these two authentication layers, see Authentication architecture.

Set up authentication

Authenticate using the Databricks CLI:

databricks auth login --host https://your-workspace.cloud.databricks.com

Follow the browser prompts to log in. The CLI caches your OAuth token at ~/.databricks/token-cache.json.

Then choose your access method:

Python SDK

The SDK uses unified authentication and automatically handles OAuth tokens:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

Java SDK

The SDK uses unified authentication and automatically handles OAuth tokens:

import com.databricks.sdk.WorkspaceClient;

WorkspaceClient w = new WorkspaceClient();

CLI

Commands automatically use the cached token:

databricks postgres list-projects

curl

Generate a token for direct API calls:

export DATABRICKS_TOKEN=$(databricks auth token | jq -r .access_token)

curl -X GET "https://your-workspace.cloud.databricks.com/api/2.0/postgres/projects" \
  -H "Authorization: Bearer ${DATABRICKS_TOKEN}"

OAuth tokens expire after one hour. Regenerate as needed.
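
If a script runs longer than an hour, it can refresh the token by shelling out to the CLI. A minimal Python sketch (the helper name is illustrative; it assumes the Databricks CLI is installed and logged in):

import json
import subprocess

def fresh_token() -> str:
    """Fetch a new OAuth access token via the Databricks CLI."""
    # `databricks auth token` prints a JSON object containing access_token
    out = subprocess.run(
        ["databricks", "auth", "token"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["access_token"]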

For more details, see Authorize user access to Databricks with OAuth.

Available endpoints (Beta)

All endpoints use the base path /api/2.0/postgres/.
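
For direct REST calls, a small sketch of a reusable requests session, assuming the WORKSPACE and DATABRICKS_TOKEN environment variables from the curl examples above:

import os

import requests

WORKSPACE = os.environ["WORKSPACE"]  # e.g. https://your-workspace.cloud.databricks.com
BASE = f"{WORKSPACE}/api/2.0/postgres"

session = requests.Session()
session.headers["Authorization"] = f"Bearer {os.environ['DATABRICKS_TOKEN']}"

# List projects (GET /projects)
resp = session.get(f"{BASE}/projects")
resp.raise_for_status()
print(resp.json())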

Projects

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create project | POST | /projects | Create a project |
| Update project | PATCH | /projects/{project_id} | General settings |
| Delete project | DELETE | /projects/{project_id} | Delete a project |
| Get project | GET | /projects/{project_id} | Get project details |
| List projects | GET | /projects | List projects |

Branches

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create branch | POST | /projects/{project_id}/branches | Create a branch |
| Update branch | PATCH | /projects/{project_id}/branches/{branch_id} | Update branch settings |
| Delete branch | DELETE | /projects/{project_id}/branches/{branch_id} | Delete a branch |
| Get branch | GET | /projects/{project_id}/branches/{branch_id} | View branches |
| List branches | GET | /projects/{project_id}/branches | List branches |

Endpoints (Computes and Read Replicas)

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Create endpoint | POST | /projects/{project_id}/branches/{branch_id}/endpoints | Create a compute / Create a read replica |
| Update endpoint | PATCH | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | Edit a compute / Edit a read replica |
| Delete endpoint | DELETE | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | Delete a compute / Delete a read replica |
| Get endpoint | GET | /projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} | View computes |
| List endpoints | GET | /projects/{project_id}/branches/{branch_id}/endpoints | View computes |

Database Credentials

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Generate database credential | POST | /credentials | OAuth token authentication |

Operations

| Operation | Method | Endpoint | Documentation |
| --- | --- | --- | --- |
| Get operation | GET | /projects/{project_id}/operations/{operation_id} | See example below |

Get operation

Check the status of a long-running operation by its resource name.

Python SDK

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Start an operation (example: create project)
operation = w.postgres.create_project(...)
print(f"Operation started: {operation.name}")

# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")

Java SDK

import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.postgres.*;

WorkspaceClient w = new WorkspaceClient();

// Start an operation (example: create project)
CreateProjectOperation operation = w.postgres().createProject(...);
System.out.println("Operation started: " + operation.getName());

// Wait for completion
Project result = operation.waitForCompletion();
System.out.println("Operation completed: " + result.getName());

CLI

The CLI automatically waits for operations to complete by default. Use --no-wait to skip polling:

# Create project without waiting
databricks postgres create-project --no-wait ...

# Later, check the operation status
databricks postgres get-operation projects/my-project/operations/abc123

curl

# Get operation status
curl -X GET "$WORKSPACE/api/2.0/postgres/projects/my-project/operations/abc123" \
  -H "Authorization: Bearer ${DATABRICKS_TOKEN}" | jq

Response format:

{
  "name": "projects/my-project/operations/abc123",
  "done": true,
  "response": {
    "@type": "type.googleapis.com/databricks.postgres.v1.Project",
    "name": "projects/my-project",
    ...
  }
}

Fields (see the handling sketch after this list):

  • done: false while in progress, true when complete
  • response: Contains the result when done is true
  • error: Contains error details if operation failed
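
As referenced above, a sketch of interpreting these fields from a raw operation response. The dict shape mirrors the JSON shown earlier; the exception handling is an assumption for illustration:

def unwrap_operation(op: dict) -> dict:
    """Return the operation's result, or raise if it failed or is still running."""
    if not op.get("done"):
        raise RuntimeError(f"Operation {op['name']} is still in progress")
    if "error" in op:
        # error carries the failure details when the operation did not succeed
        raise RuntimeError(f"Operation {op['name']} failed: {op['error']}")
    return op["response"]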

Common patterns

Resource naming

Resources follow a hierarchical naming pattern where child resources are scoped to their parent.

Projects use this format:

projects/{project_id}

Child resources like operations are nested under their parent project:

projects/{project_id}/operations/{operation_id}

This means you need the parent project ID to access operations or other child resources.
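
A trivial Python sketch of composing these hierarchical names:

def operation_name(project_id: str, operation_id: str) -> str:
    """Build the fully qualified resource name for an operation."""
    return f"projects/{project_id}/operations/{operation_id}"

print(operation_name("my-project", "abc123"))
# projects/my-project/operations/abc123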

Resource IDs:

When creating resources, you must provide a resource ID (like my-app) for the project_id, branch_id, or endpoint_id parameter. This ID becomes part of the resource path in API calls (such as projects/my-app/branches/development).

You can optionally provide a display_name to give your resource a more descriptive label. If you don't specify a display name, the system uses your resource ID as the display name.
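
As an illustration only, a hedged sketch of creating a project over REST with both a resource ID and a display name. The request body shape here is an assumption (the update-mask example later on this page suggests display_name lives under spec); consult the Postgres API reference for the authoritative schema:

import os

import requests

WORKSPACE = os.environ["WORKSPACE"]
TOKEN = os.environ["DATABRICKS_TOKEN"]

# Hypothetical body shape: project_id as the resource ID, display_name under spec
resp = requests.post(
    f"{WORKSPACE}/api/2.0/postgres/projects",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"project_id": "my-app", "spec": {"display_name": "My App"}},
)
resp.raise_for_status()
print(resp.json()["name"])  # long-running operation name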

Tip

To locate a project in the Lakebase UI, look for its display name in the projects list. If you didn't provide a custom display name when creating the project, search for your project_id (such as "my-app").

Note

Resource IDs cannot be changed after creation.

Requirements (see the validation sketch after this list):

  • Must be 1-63 characters long
  • Lowercase letters, digits, and hyphens only
  • Cannot start or end with a hyphen
  • Examples: my-app, analytics-db, customer-123
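
A quick Python sketch that validates these rules before making an API call; the regex encodes exactly the constraints listed above:

import re

# 1-63 chars; lowercase letters, digits, and hyphens; no leading/trailing hyphen
RESOURCE_ID = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

for candidate in ["my-app", "analytics-db", "customer-123", "-bad-", "Bad_ID"]:
    print(candidate, bool(RESOURCE_ID.match(candidate)))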

Long-running operations (LROs)

Create, update, and delete operations return a databricks.longrunning.Operation object that provides a completion status.

Example operation response:

{
  "name": "projects/my-project/operations/abc123",
  "done": false
}

Poll for completion using GetOperation:

Python SDK

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Start an operation
operation = w.postgres.create_project(...)

# Wait for completion
result = operation.wait()
print(f"Operation completed: {result.name}")

Java SDK

import com.databricks.sdk.WorkspaceClient;
import com.databricks.sdk.service.postgres.*;

WorkspaceClient w = new WorkspaceClient();

// Start an operation
CreateProjectOperation operation = w.postgres().createProject(...);

// Wait for completion
Project result = operation.waitForCompletion();
System.out.println("Operation completed: " + result.getName());

CLI

The CLI automatically waits for operations to complete by default. Use --no-wait to return immediately:

databricks postgres create-project --no-wait ...

curl

# Poll the operation
curl "$WORKSPACE/api/2.0/postgres/projects/my-project/operations/abc123" \
  -H "Authorization: Bearer ${DATABRICKS_TOKEN}" | jq '.done'

Poll every few seconds until done is true.
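
In Python, the same polling loop might look like this sketch (assumes the WORKSPACE and DATABRICKS_TOKEN environment variables, as in the curl examples):

import os
import time

import requests

WORKSPACE = os.environ["WORKSPACE"]
TOKEN = os.environ["DATABRICKS_TOKEN"]
url = f"{WORKSPACE}/api/2.0/postgres/projects/my-project/operations/abc123"

while True:
    op = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
    if op.get("done"):
        break
    time.sleep(5)  # poll every few seconds, as suggested above

if "error" in op:
    raise RuntimeError(op["error"])
print(op["response"])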

Update masks

Update operations require an update_mask parameter specifying which fields to modify. This prevents accidentally overwriting unrelated fields.

Format differences:

| Method | Format | Example |
| --- | --- | --- |
| REST API | Query parameter | ?update_mask=spec.display_name |
| Python SDK | FieldMask object | update_mask=FieldMask(field_mask=["spec.display_name"]) |
| CLI | Positional argument | update-project NAME spec.display_name |
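
For example, over REST the mask is passed as a query parameter. A sketch updating a project's display name (assumes the same environment variables as above; the request body shape is an assumption):

import os

import requests

WORKSPACE = os.environ["WORKSPACE"]
TOKEN = os.environ["DATABRICKS_TOKEN"]

# Only the field named in update_mask is modified; other fields are untouched
resp = requests.patch(
    f"{WORKSPACE}/api/2.0/postgres/projects/my-project",
    params={"update_mask": "spec.display_name"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"spec": {"display_name": "My App (renamed)"}},  # hypothetical body shape
)
resp.raise_for_status()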

Additional resources

SDKs and infrastructure-as-code