This page describes how to send audio and video inputs to Gemini foundation models on Azure Databricks. You can provide media as a URL or as base64-encoded inline data using the Chat Completions API or the Google Gemini API.
Requirements
- See requirements to use foundation models.
- Install the appropriate package on your cluster based on the querying client option you choose (for example, the OpenAI Python SDK for the Chat Completions API, or the Google Gen AI SDK for the Google Gemini API).
Input methods
You can provide audio and video inputs using two methods:
- URL: Pass a publicly accessible URL to the media file. For video, YouTube URLs are also supported.
- Base64 inline data: Encode the file as a base64 string and pass it as a data URI (for example, data:video/mp4;base64,<encoded_data>).
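Building the data URI for inline data is the same for any media type: base64-encode the file bytes, then prepend the MIME type. A minimal sketch (the helper name and the temp file standing in for a real video are illustrative, not part of any API):

```python
import base64
import tempfile

def to_data_uri(path: str, mime_type: str) -> str:
    """Read a local media file and return its contents as a base64 data URI."""
    with open(path, "rb") as f:
        encoded = base64.standard_b64encode(f.read()).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"

# Stand-in for a real video file: a throwaway temp file with a few MP4 header bytes
with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
    tmp.write(b"\x00\x00\x00\x18ftypmp42")
    sample_path = tmp.name

uri = to_data_uri(sample_path, "video/mp4")
print(uri)  # data:video/mp4;base64,AAAAGGZ0eXBtcDQy
```

The resulting string can be passed anywhere the examples below show a data URI.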
Chat Completions API
The Chat Completions API accepts video and audio inputs. Use the video_url and audio_url content types in the messages array to pass media. Each content item includes a url field that accepts either a web URL or a base64 data URI.
Video input
Python
import os
import base64
from openai import OpenAI

DATABRICKS_TOKEN = os.environ.get('DATABRICKS_TOKEN')
DATABRICKS_BASE_URL = os.environ.get('DATABRICKS_BASE_URL')

client = OpenAI(
    api_key=DATABRICKS_TOKEN,
    base_url=DATABRICKS_BASE_URL,
)

# Encode a local video file as base64
with open("video.mp4", "rb") as f:
    video_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="databricks-gemini-3-1-pro",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize what happens in these videos."},
            {
                "type": "video_url",
                "video_url": {"url": "https://example.com/sample-video.mp4"},
            },
            {
                "type": "video_url",
                "video_url": {"url": f"data:video/mp4;base64,{video_b64}"},
            },
        ],
    }],
    max_tokens=1024,
)

print(response.choices[0].message.content)
REST API
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Summarize what happens in these videos."},
        {
          "type": "video_url",
          "video_url": {"url": "https://example.com/sample-video.mp4"}
        },
        {
          "type": "video_url",
          "video_url": {"url": "data:video/mp4;base64,<base64_encoded_data>"}
        }
      ]
    }],
    "max_tokens": 1024
  }' \
  https://<workspace_host>.databricks.com/serving-endpoints/databricks-gemini-3-1-pro/invocations
Audio input
Python
import os
import base64
from openai import OpenAI

DATABRICKS_TOKEN = os.environ.get('DATABRICKS_TOKEN')
DATABRICKS_BASE_URL = os.environ.get('DATABRICKS_BASE_URL')

client = OpenAI(
    api_key=DATABRICKS_TOKEN,
    base_url=DATABRICKS_BASE_URL,
)

# Encode a local audio file as base64
with open("audio.mp3", "rb") as f:
    audio_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="databricks-gemini-3-1-pro",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe this audio and summarize the key points."},
            {
                "type": "audio_url",
                "audio_url": {"url": "https://example.com/sample-audio.mp3"},
            },
            {
                "type": "audio_url",
                "audio_url": {"url": f"data:audio/mp3;base64,{audio_b64}"},
            },
        ],
    }],
    max_tokens=1024,
)

print(response.choices[0].message.content)
REST API
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Transcribe this audio and summarize the key points."},
        {
          "type": "audio_url",
          "audio_url": {"url": "https://example.com/sample-audio.mp3"}
        },
        {
          "type": "audio_url",
          "audio_url": {"url": "data:audio/mp3;base64,<base64_encoded_data>"}
        }
      ]
    }],
    "max_tokens": 1024
  }' \
  https://<workspace_host>.databricks.com/serving-endpoints/databricks-gemini-3-1-pro/invocations
Google Gemini API
Use the Google Gemini API to pass media as inlineData (base64-encoded) or fileData (URL reference) within the parts array.
Video input
Python
from google import genai
from google.genai import types
import base64
import os

DATABRICKS_TOKEN = os.environ.get('DATABRICKS_TOKEN')

client = genai.Client(
    api_key="databricks",
    http_options=types.HttpOptions(
        base_url="https://<workspace_host>.databricks.com/serving-endpoints/gemini",
        headers={
            "Authorization": f"Bearer {DATABRICKS_TOKEN}",
        },
    ),
)

# Encode a local video file as base64
with open("video.mp4", "rb") as f:
    video_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.models.generate_content(
    model="databricks-gemini-3-1-pro",
    contents=[
        types.Content(
            role="user",
            parts=[
                types.Part(text="Summarize what happens in these videos."),
                types.Part(
                    file_data=types.FileData(
                        mime_type="video/mp4",
                        file_uri="https://example.com/sample-video.mp4",
                    )
                ),
                types.Part(
                    inline_data=types.Blob(
                        mime_type="video/mp4",
                        data=video_b64,
                    )
                ),
            ],
        ),
    ],
    config=types.GenerateContentConfig(
        max_output_tokens=1024,
    ),
)

print(response.text)
REST API
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{
      "role": "user",
      "parts": [
        {"text": "Summarize what happens in these videos."},
        {
          "fileData": {
            "mimeType": "video/mp4",
            "fileUri": "https://example.com/sample-video.mp4"
          }
        },
        {
          "inlineData": {
            "mimeType": "video/mp4",
            "data": "<base64_encoded_data>"
          }
        }
      ]
    }]
  }' \
  https://<workspace_host>.databricks.com/serving-endpoints/gemini/v1beta/models/databricks-gemini-3-1-pro:generateContent
Audio input
Python
from google import genai
from google.genai import types
import base64
import os

DATABRICKS_TOKEN = os.environ.get('DATABRICKS_TOKEN')

client = genai.Client(
    api_key="databricks",
    http_options=types.HttpOptions(
        base_url="https://<workspace_host>.databricks.com/serving-endpoints/gemini",
        headers={
            "Authorization": f"Bearer {DATABRICKS_TOKEN}",
        },
    ),
)

# Encode a local audio file as base64
with open("audio.mp3", "rb") as f:
    audio_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.models.generate_content(
    model="databricks-gemini-3-1-pro",
    contents=[
        types.Content(
            role="user",
            parts=[
                types.Part(text="Transcribe this audio and summarize the key points."),
                types.Part(
                    file_data=types.FileData(
                        mime_type="audio/mp3",
                        file_uri="https://example.com/sample-audio.mp3",
                    )
                ),
                types.Part(
                    inline_data=types.Blob(
                        mime_type="audio/mp3",
                        data=audio_b64,
                    )
                ),
            ],
        ),
    ],
    config=types.GenerateContentConfig(
        max_output_tokens=1024,
    ),
)

print(response.text)
REST API
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{
      "role": "user",
      "parts": [
        {"text": "Transcribe this audio and summarize the key points."},
        {
          "fileData": {
            "mimeType": "audio/mp3",
            "fileUri": "https://example.com/sample-audio.mp3"
          }
        },
        {
          "inlineData": {
            "mimeType": "audio/mp3",
            "data": "<base64_encoded_data>"
          }
        }
      ]
    }]
  }' \
  https://<workspace_host>.databricks.com/serving-endpoints/gemini/v1beta/models/databricks-gemini-3-1-pro:generateContent
Supported models
Audio and video inputs are supported on the following Gemini Pro pay-per-token foundation models. See Databricks-hosted foundation models available in Foundation Model APIs for region availability.
- databricks-gemini-3-1-pro
- databricks-gemini-3-pro
- databricks-gemini-2-5-pro
Limitations
- Audio and video inputs are only available on Gemini Pro pay-per-token foundation models. Provisioned throughput endpoints are not supported.
- Multiple audio or video inputs can be included in a single request, but large files increase latency and token usage.
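One reason large inline files are costly: base64 encoding expands every 3 raw bytes into 4 encoded bytes, so an inline upload is about a third larger on the wire than the source file. A quick illustration (the 3 MB buffer stands in for a real media file):

```python
import base64

# Base64 maps each 3-byte group of input to 4 output bytes,
# so encoded payloads are ~33% larger than the raw file.
raw = b"\x00" * 3_000_000  # stand-in for a ~3 MB media file
encoded = base64.standard_b64encode(raw)

print(len(raw), len(encoded))  # 3000000 4000000
```

For large files, passing a URL instead of inline data avoids this overhead in the request body.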