This article describes how to choose and configure a Python environment for Serverless GPU Compute, including environment caching behavior, custom module imports, and known limitations.
What environment to use
Serverless GPU compute offers two managed Python environments:
Note
Workspace base environments are not supported for Serverless GPU compute. Instead, use the default or AI environment, and add any extra dependencies in the Environments side panel or install them with %pip.
Default base environment (Barebones)
A minimal, stable environment containing only the packages required for Serverless GPU compute to operate: torch, CUDA, and torchvision, chosen for broad compatibility. Keeping the base minimal allows Databricks to upgrade the server independently, delivering performance improvements, security enhancements, and bug fixes without requiring any code changes to your workloads.
Best for: Users who want full control over their dependency stack and prefer to install only what they need.
This is the default environment when you connect to Serverless GPU Compute.
For details about the package versions installed in each environment version, see the release notes.
Databricks AI environment
Available in serverless GPU environment 4 and later. The AI environment is built on top of the default base environment with common runtime packages and packages specific to machine learning on GPUs. Pre-installed packages include:
- PyTorch (with CUDA support)
- Transformers (Hugging Face)
- LangChain
- XGBoost
- And additional ML/DL dependencies
Best for: ML practitioners who want a complete environment for training workloads, fine-tuning, and experimentation without manual dependency management.
To select: In the Environment side panel, choose AI v4 as your base environment.
For details about the package versions installed in each environment version, see the release notes.
Workspace base environments
Workspace base environments are not supported for Serverless GPU Compute. You cannot use custom workspace-level environment configurations.
To configure your deep learning environment for a project, use one of the two provided base environments (default or Databricks AI) and install additional packages programmatically using %pip install within your notebook or at the top of your training script:
%pip install datasets accelerate peft bitsandbytes
You can install additional libraries to the Serverless GPU Compute environment. See Add dependencies to the notebook.
Behavior
When are environments cached?
Environments are cached across sessions to speed up startup times. When you reconnect to Serverless GPU Compute with the same environment configuration, previously installed packages may be available from cache, reducing setup time.
However, cache behavior is not guaranteed — always ensure your notebook includes the necessary %pip install commands for reproducibility.
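Because a warm cache is never guaranteed, a fail-fast check at the top of a notebook can confirm that dependencies are importable before a long training run starts. The sketch below is illustrative: the package list and the `missing_packages` helper are hypothetical, and note that `importlib.util.find_spec` takes import names, which may differ from pip distribution names.

```python
import importlib.util

# Hypothetical dependency list for this notebook; adjust to your workload.
# These are import names (e.g. "peft"), not necessarily pip package names.
REQUIRED = ["datasets", "accelerate", "peft"]

def missing_packages(names):
    """Return the subset of import names that are not installed."""
    return [n for n in names if importlib.util.find_spec(n) is None]

missing = missing_packages(REQUIRED)
if missing:
    # A non-empty list suggests a cache miss: re-run your %pip install cell.
    print(f"Missing packages (cache miss?): {missing}")
```

If the check reports missing packages, re-running the notebook's %pip install cell restores them regardless of cache state.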
How do I import custom modules?
You can import custom modules by placing them in /Workspace/Shared and adding the path to sys.path:
import sys
sys.path.append("/Workspace/Shared/my-project/src")
from my_module import my_function
You can also upload module files as Workspace files and import them directly. For multi-user collaboration, store shared code in /Workspace/Shared rather than user-specific folders. For active development, use user-specific folders and push to a remote Git repository for version control.
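The sys.path pattern above can be exercised end to end. This self-contained sketch writes a throwaway module to a temporary directory (standing in for a path like /Workspace/Shared/my-project/src) and imports it the same way; the module and function names are placeholders.

```python
import sys
import tempfile
from pathlib import Path

# Stand-in for /Workspace/Shared/my-project/src; a temp dir keeps this runnable anywhere.
src_dir = Path(tempfile.mkdtemp())
(src_dir / "my_module.py").write_text(
    "def my_function():\n    return 'hello from my_module'\n"
)

# Same pattern as on Serverless GPU compute: extend sys.path, then import.
sys.path.append(str(src_dir))
from my_module import my_function

print(my_function())
```

On Serverless GPU compute you would skip the temporary directory and append the real /Workspace/Shared path instead.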
Limitations
The following capabilities are not available on Serverless GPU Compute:
- Spark functions — You cannot import or use PySpark functions directly. Serverless GPU compute is a Python-only environment; Spark is not available as a local runtime. However, Spark Connect is available for data loading. See Load data on Serverless GPU compute.
- Databricks Runtime ML libraries — Pre-installed packages are not a replacement for Databricks Runtime ML. Some ML libraries available in Databricks Runtime ML may not be pre-installed on Serverless GPU Compute.
- Workspace base environments — Custom workspace-level environment configurations are not supported.
- PrivateLink-dependent packages — pip install from repositories behind PrivateLink will fail.