Azure OpenAI: "I'm sorry, but I cannot assist with that request." and incomplete status

Naoto Yoshioka 0 Reputation points
2025-07-22T05:46:22.8866667+00:00

Hello Azure OpenAI team,

When I pass certain short words as the prompt, the model always returns "I'm sorry, but I cannot assist with that request." and the status becomes incomplete. If I modify those words slightly, the problem stops occurring. I have turned off all Content Filters, but the behaviour persists. How can I avoid this issue? I have attached a sample below.

Thanks,

import OpenAI, {AzureOpenAI} from "openai";
import dotenv from 'dotenv';

dotenv.config();

if (!process.env.OPENAI_API_KEY
  || !process.env.AZURE_OPENAI_ENDPOINT
  || !process.env.AZURE_OPENAI_API_KEY
  || !process.env.AZURE_OPENAI_DEPLOYMENT) {
  console.error("Missing envs in .env file");
  process.exit(1);
}

const instructions = "";
//const prompt = "ドラ焼き";  // complete
const prompt = "どら焼き";  // incomplete

const useAzure = true;
let openai;
if (useAzure) {
  const apiVersion = "2025-03-01-preview";
  openai = new AzureOpenAI({ apiVersion });
} else {
  openai = new OpenAI();
}
const response = await openai.responses.create({
  //model: "gpt-4o",
  model: "gpt-4.1",
  input: [
    { role: "developer", content: instructions },
    { role: "user", content: prompt },
  ],
});
console.log(response.output_text);
console.log(response.status);

Azure OpenAI Service
1 answer

  1. Nikhil Jha (Accenture International Limited) 4,320 Reputation points Microsoft External Staff Moderator
    2025-08-05T06:42:08.2633333+00:00

    Hello Naoto Yoshioka,

    Thank you for raising this issue and sharing detailed examples with us on the Microsoft Q&A portal. You've identified a subtle but important behaviour that can occur when working with Azure OpenAI models, especially in chat-oriented deployments with content safety mechanisms in place.

    Even though you’ve disabled content filters, the behaviour you’re seeing (responses like "I'm sorry, but I cannot assist with that request." with status: incomplete) is likely due to internal model-level refusal behaviour, not content filtering at the Azure API level.
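
    One way to see which of the two is happening is to check the response's `incomplete_details` field, which the Responses API populates alongside `status`. A minimal sketch (field availability may vary by API version, so each access is guarded):

```javascript
// Sketch: summarize why a Responses API call ended with status "incomplete".
// Works on the plain response object returned by openai.responses.create().
function describeIncomplete(response) {
  if (response.status !== "incomplete") {
    return `status=${response.status}`;
  }
  // incomplete_details.reason is e.g. "max_output_tokens" or "content_filter";
  // guard the access in case the field is absent in some API versions.
  const reason = response.incomplete_details?.reason ?? "unknown";
  return `status=incomplete (reason: ${reason})`;
}

// Example with plain objects standing in for API responses:
console.log(describeIncomplete({ status: "completed" }));
console.log(describeIncomplete({
  status: "incomplete",
  incomplete_details: { reason: "content_filter" },
}));
```

    Logging this after each call will tell you whether the truncation is attributed to a filter or to something else, which is useful evidence if you later open a support ticket.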

    When content filters are turned off at the Azure Portal or deployment settings, Azure OpenAI models may still trigger internal “safety” or “guardrail” mechanisms. These mechanisms can silently block (or refuse to process) certain short, ambiguous, or context-lacking prompts, as a part of the model’s built-in safety design.

    This is independent of the customer-configurable “content filtering” feature.

    • The refusal is typically triggered by inputs that match patterns associated with unsafe, risky, or restricted requests—even if the input seems harmless or business-relevant.
    • Sometimes, even benign foreign-language words or innocuous phrases trip these guardrails if they appear potentially sensitive when presented without context.

    For example:

    • "どら焼き" (hiragana form of "dorayaki") may be tokenized differently from "ドラ焼き" (katakana), causing the model to behave unexpectedly.
    • "4k comigo, 6k com dependentes" (Portuguese: "4k with me, 6k with dependents") may resemble sensitive personal or financial data patterns, triggering refusal heuristics. This unpredictable behaviour is due to the model’s internal risk classifier interpreting certain spellings or wordings as potentially problematic, even when they are not.

    Recommended Workarounds:

    1. Modify the Prompt Slightly: As you've discovered, even small changes (e.g., "ドラ焼き" vs "どら焼き") can bypass the issue. Adding context or rephrasing often helps.
    2. Add a System or Developer Message: Include a system or developer role message to guide the model’s behaviour. For example: { role: "developer", content: "You are a helpful assistant." } This can help the model interpret short prompts more reliably.
    3. Use Longer or Structured Prompts: Short, isolated phrases are more likely to be flagged. Try embedding them in a sentence or question.
    4. Enable Streaming or Retry Logic: If you're building a chatbot, consider enabling stream: true and implementing retry logic for incomplete responses.
    5. Log and Report Specific Cases: If certain phrases consistently fail, document them and raise a support ticket. This helps Microsoft improve prompt handling and filtering logic.
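
    Workarounds 2–4 can be combined into one small helper. The sketch below is illustrative, not part of the SDK: `askWithRetry` and the wrapper sentence are my own names/choices, and `createFn` is injected so the retry logic stays testable — in real code you would pass `(req) => openai.responses.create(req)`.

```javascript
// Sketch combining workarounds 2-4: add a developer message, embed the
// short phrase in a fuller sentence for context, and retry when the
// response comes back with status "incomplete".
async function askWithRetry(createFn, phrase, maxAttempts = 3) {
  const request = {
    model: "gpt-4.1",
    input: [
      // Workaround 2: a developer message guiding the model's behaviour.
      { role: "developer", content: "You are a helpful assistant." },
      // Workaround 3: embedding the bare phrase in a question adds context.
      { role: "user", content: `Please explain the term "${phrase}".` },
    ],
  };
  let last;
  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    last = await createFn(request);
    // Workaround 4: retry while the response stays incomplete.
    if (last.status === "completed") return last;
  }
  return last; // still incomplete after maxAttempts
}
```

    In practice you may also want a small delay between attempts, or a slight rewording of the wrapper sentence on each retry, since resending the identical request can hit the same refusal.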

