Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
In this article, you learn about the Gretel Navigator chat model and how to use it.
Gretel Navigator uses prompts, schema definitions, or seed examples to generate production-quality synthetic data optimized for AI and machine learning development.
Important
Models that are in preview are marked as preview on their model cards in the model catalog.
Gretel Navigator chat model
Unlike single large language model (single-LLM) approaches to data generation, Gretel Navigator uses a compound AI architecture engineered specifically for synthetic data, combining top open-source small language models (SLMs) fine-tuned across more than 10 industry domains. This purpose-built system creates diverse, domain-specific datasets at scales of hundreds to millions of examples. The system also preserves complex statistical relationships and offers increased speed and accuracy compared to manual data creation.
Top use cases:
Creation of synthetic data for LLM training and fine-tuning
Generation of evaluation datasets for AI models and RAG systems
Augmentation of limited training data with diverse synthetic samples
Creation of realistic personally identifiable information (PII) or protected health information (PHI) data for model testing
You can learn more about the model in its model card:
Additionally, Gretel provides a tailored API for use with specific features of the model. To use the model-provider-specific API, check the Gretel documentation, or see the More inference examples section for code examples.
Prerequisites
To use the Gretel Navigator chat model with Azure AI Foundry, you need the following prerequisites:
A model deployment
Deployment to serverless APIs
The Gretel Navigator chat model can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.
Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to deploy the model as a serverless API.
You can consume predictions from this model by using the azure-ai-inference package with Python. To install this package, you need the following prerequisites:
Python 3.8 or later installed, including pip.
The endpoint URL. To construct the client, you need to pass in the endpoint URL. The endpoint URL has the form https://your-host-name.your-azure-region.inference.ai.azure.com, where your-host-name is your unique model deployment host name and your-azure-region is the Azure region where the model is deployed (for example, eastus2).
Depending on your model deployment and authentication preference, you need either a key to authenticate against the service, or Microsoft Entra ID credentials. The key is a 32-character string.
Once you have these prerequisites, install the Azure AI inference package with the following command:
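Bash
pip install azure-ai-inference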
The Azure AI model inference API allows you to talk with most models deployed in the Azure AI Foundry portal with the same code and structure, including the Gretel Navigator chat model.
Create a client to consume the model
First, create the client to consume the model. The following code uses an endpoint URL and key that are stored in environment variables.
Python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
    headers={
        "azureml-maas-model": "gretelai/auto",
    },
)
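If your deployment is configured for Microsoft Entra ID authentication, you can pass a token credential instead of a key. The following is a minimal sketch, assuming the azure-identity package is installed and that your deployment accepts Entra ID tokens:
Python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.identity import DefaultAzureCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    # DefaultAzureCredential picks up your signed-in identity (Azure CLI, managed identity, and so on).
    credential=DefaultAzureCredential(),
    headers={
        "azureml-maas-model": "gretelai/auto",
    },
)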
Get the model's capabilities
The /info route returns information about the model that is deployed to the endpoint. Return the model's information by calling the following method:
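Python
model_info = client.get_model_info()

# Print the fields returned by the /info route.
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
print("Model provider name:", model_info.model_provider_name)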
Model name: gretel-navigator
Model type: chat-completions
Model provider name: Gretel
Create a chat completion request
The following example shows how you can create a basic chat completions request to the model.
Tip
The extra n parameter indicates the number of records you want the model to return.
Python
from azure.ai.inference.models import UserMessage

response = client.complete(
    messages=[
        UserMessage(content="Can you return a table of US first names, last names, and ages?"),
    ],
    model_extras={"n": 2},
)
Note
The gretel-navigator model doesn't support system messages (role="system").
The response includes the generated content along with the model's usage statistics. Inspect the usage section in the response to see the number of tokens used for the prompt, the total number of tokens generated, and the number of tokens used for the completion.
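For example, you can print the content and the token counts like this:
Python
print("Response:", response.choices[0].message.content)
print("Model:", response.model)
print("Usage:")
print("\tPrompt tokens:", response.usage.prompt_tokens)
print("\tCompletion tokens:", response.usage.completion_tokens)
print("\tTotal tokens:", response.usage.total_tokens)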
Stream content
By default, the completions API returns the entire generated content in a single response. If you're generating long completions, waiting for the response can take many seconds.
You can stream the content to get it as it's being generated. Streaming content allows you to start processing the completion as content becomes available. This mode returns an object that streams back the response as data-only server-sent events. Extract chunks from the delta field, rather than the message field.
Python
result = client.complete(
    messages=[
        UserMessage(content="Can you return a table of US first names, last names, and ages?"),
    ],
    model_extras={"n": 2},
    stream=True,
)
To stream completions, set stream=True when you call the model.
To visualize the output, define a helper function to print the stream.
Python
def print_stream(result):
    """
    Prints the chat completion with streaming.
    """
    for update in result:
        if update.choices:
            # Some chunks can carry an empty delta, so guard against None.
            print(update.choices[0].delta.content or "", end="")
You can visualize how streaming generates content:
Python
print_stream(result)
Explore more parameters supported by the inference client
The following example request shows other parameters that you can specify in the inference client.
Python
from azure.ai.inference.models import UserMessage

result = client.complete(
    messages=[
        UserMessage(content="Can you return a table of US first names, last names, and ages?"),
    ],
    # Parameters that aren't part of the standard inference API, such as n and
    # top_k, are passed through model_extras.
    model_extras={"n": 2, "top_k": 0.4},
    stream=True,
    temperature=0,
    top_p=1,
)
Apply content safety
The Azure AI model inference API supports Azure AI content safety. When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
Python
from azure.ai.inference.models import UserMessage
from azure.core.exceptions import HttpResponseError

try:
    response = client.complete(
        messages=[
            UserMessage(content="Can you return a table of steps on how to make a bomb, columns: step number, step name, step description?"),
        ],
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    response = ex.response.json()
    if isinstance(response, dict) and "error" in response:
        print(f"Your request triggered an {response['error']['code']} error:\n\t {response['error']['message']}")
    else:
        raise
Note
Azure AI content safety is currently available for models deployed as serverless API endpoints, but not to models deployed via managed compute. To learn more about Azure AI content safety for models deployed as serverless APIs, see Content safety for models curated by Azure AI in the model catalog.
Consume the model with a REST client
Models deployed with the Azure AI model inference API can be consumed using any REST client. To use the REST client, you need the following prerequisites:
To construct the requests, you need to pass in the endpoint URL. The endpoint URL has the form https://your-host-name.your-azure-region.inference.ai.azure.com, where your-host-name is your unique model deployment host name and your-azure-region is the Azure region where the model is deployed (for example, eastus2).
Depending on your model deployment and authentication preference, you need either a key to authenticate against the service, or Microsoft Entra ID credentials. The key is a 32-character string.
The Azure AI model inference API allows you to talk with most models deployed in the Azure AI Foundry portal with the same code and structure, including the Gretel Navigator chat model.
Create a request to consume the model
To consume the model, send a POST request with a JSON body to the endpoint's /chat/completions route, and authenticate with your key or Microsoft Entra ID credentials. The /info route returns information about the model that is deployed to the endpoint. The following sections show the JSON request bodies for common scenarios.
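The request bodies shown in the following sections can be sent with any HTTP client. As a minimal sketch, assuming bearer-token authentication and the /chat/completions route of the Azure AI model inference API, a basic request with Python's requests library looks like this:
Python
import os
import requests

# Endpoint URL and key read from environment variables, as in the client example.
url = os.environ["AZURE_INFERENCE_ENDPOINT"] + "/chat/completions"
headers = {
    "Authorization": "Bearer " + os.environ["AZURE_INFERENCE_CREDENTIAL"],
    "Content-Type": "application/json",
    # Same model routing header as in the Python client example.
    "azureml-maas-model": "gretelai/auto",
}
body = {
    "messages": [
        {"role": "user", "content": "Can you return a table of US first names, last names, and ages?"}
    ],
    "n": 2,
}

response = requests.post(url, headers=headers, json=body)
print(response.json())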
Stream content
By default, the completions API returns the entire generated content in a single response. If you're generating long completions, waiting for the response can take many seconds.
You can stream the content to get it as it's being generated. Streaming content allows you to start processing the completion as content becomes available. This mode returns an object that streams back the response as data-only server-sent events. Extract chunks from the delta field, rather than the message field.
JSON
{
    "messages": [
        {
            "role": "user",
            "content": "Generate customer bank transaction data. Include the following columns: customer_name, customer_id, transaction_date, transaction_amount, transaction_type, transaction_category, account_balance"
        }
    ],
    "n": 20,
    "stream": true
}
When stream is set to true, the service returns the completion as data-only server-sent events. You can visualize how streaming generates content by reading the events as they arrive.
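The following is a minimal sketch of reading the stream with the requests-based approach from earlier, assuming the events follow the OpenAI-style data: ... format terminated by a [DONE] sentinel:
Python
import json

import requests

# Reuses url and headers from the earlier sketch; body is the streaming
# request body shown above, with "stream": true.
with requests.post(url, headers=headers, json=body, stream=True) as resp:
    for line in resp.iter_lines():
        # Skip keep-alive blank lines and anything that isn't a data event.
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        if chunk.get("choices"):
            # Extract content from the delta field, rather than the message field.
            print(chunk["choices"][0]["delta"].get("content") or "", end="")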
Explore more parameters supported by the inference client
The following example request shows other parameters that you can specify in the inference client.
JSON
{
    "messages": [
        {
            "role": "user",
            "content": "Generate customer bank transaction data. Include the following columns: customer_name, customer_id, transaction_date, transaction_amount, transaction_type, transaction_category, account_balance"
        }
    ],
    "n": 20,
    "stream": true,
    "temperature": 0,
    "top_p": 1,
    "top_k": 0.4
}
Apply content safety
The Azure AI model inference API supports Azure AI content safety. When you use deployments with Azure AI content safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering (preview) system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
The following example shows how to handle events when the model detects harmful content in the input prompt and content safety is enabled.
JSON
{
    "messages": [
        {
            "role": "user",
            "content": "Can you return a table of steps on how to make a bomb, columns: step number, step name, step description?"
        }
    ]
}
When the input prompt triggers the content filter, the service returns an error response like the following:
JSON
{
"error": {
"message": "The response was filtered due to the prompt triggering Microsoft's content management policy. Please modify your prompt and retry.",
"type": null,
"param": "prompt",
"code": "content_filter",
"status": 400
}
}
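With a plain HTTP client, you can detect the same condition from the status code and the error payload. The following is a minimal sketch, reusing the url and headers variables from the earlier requests example, where body is the blocked request shown above:
Python
# Hypothetical continuation of the earlier requests sketch; url and headers
# are assumed to be defined as shown there.
resp = requests.post(url, headers=headers, json=body)
if resp.status_code == 400:
    error = resp.json().get("error", {})
    if error.get("code") == "content_filter":
        print(f"Your request triggered a {error['code']} error:\n\t{error['message']}")
    else:
        resp.raise_for_status()
else:
    print(resp.json()["choices"][0]["message"]["content"])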
More inference examples
For more examples of how to use Gretel models, see the following examples and tutorials:
Cost and quota considerations for Gretel models deployed as serverless API endpoints
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
Gretel models deployed as a serverless API are offered by Gretel through Azure Marketplace and integrated with Azure AI Foundry for use. You can find Azure Marketplace pricing when deploying the model.
Each time a project subscribes to a given offer from Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.