
Defender for Cloud not generating alerts for Azure OpenAI jailbreak attempts

Hardik Bansal 5 Reputation points
2026-03-16T13:01:47.11+00:00

Problem

I have enabled Microsoft Defender for Cloud - Threat Protection for AI Workloads in UAE North region on my Azure subscription. However, I am not receiving any security alerts in Defender for Cloud when sending known jailbreak or prompt injection prompts to my Azure OpenAI endpoint.

The jailbreak prompts are successfully blocked by Prompt Shields, but no corresponding alerts appear in Defender for Cloud.

According to Microsoft documentation, alerts should appear within seconds to minutes, but even after multiple days of testing, the Security Alerts tab remains empty for AI-related threats.
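For reference, the test calls were shaped like the request below. This is a sketch only: the resource name, deployment name, and api-version are placeholders, not my real values, and nothing is actually sent here.

```python
# Sketch only: builds (does not send) the kind of chat-completions request
# used in testing. Resource name, deployment name, and api-version are
# placeholders, not real values.
import json

API_VERSION = "2024-02-01"  # assumed stable api-version

def build_chat_request(resource: str, deployment: str, prompt: str) -> dict:
    """Return the URL and JSON body for an Azure OpenAI chat-completions call."""
    url = (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={API_VERSION}"
    )
    body = {"messages": [{"role": "user", "content": prompt}]}
    # A real call would POST this body with an "api-key" request header.
    return {"url": url, "body": json.dumps(body)}

req = build_chat_request("my-resource", "my-deployment", "<known jailbreak text>")
print(req["url"])
```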

Configuration Steps Followed

I followed the official Microsoft documentation:

Enable Defender for Cloud on subscription https://learn.microsoft.com/en-us/azure/defender-for-cloud/connect-azure-subscription#enable-defender-for-cloud-on-your-azure-subscription

Onboard AI services to Defender for Cloud https://learn.microsoft.com/en-us/azure/defender-for-cloud/ai-onboarding
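The plan itself was enabled through the portal. For completeness, the equivalent REST call would be shaped roughly like the sketch below; the `AI` plan name and api-version are my assumptions from the Microsoft.Security pricings API, and a real call needs an Azure AD bearer token.

```python
# Sketch only: the Microsoft.Security/pricings PUT that turns a Defender
# plan on. The "AI" plan name and api-version are assumptions; a real
# call needs an Azure AD bearer token in the Authorization header.
import json

def build_enable_plan_request(subscription_id: str, plan: str = "AI") -> dict:
    """Return method, URL, and body for enabling a Defender plan (Standard tier)."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Security/pricings/{plan}"
        "?api-version=2024-01-01"
    )
    body = {"properties": {"pricingTier": "Standard"}}
    return {"method": "PUT", "url": url, "body": json.dumps(body)}

req = build_enable_plan_request("00000000-0000-0000-0000-000000000000")
print(req["url"])
```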

Expected Behavior

When sending jailbreak or prompt injection prompts to the Azure OpenAI endpoint, I expect security alerts such as:

Jailbreak attempt detected on Azure AI model deployment

Credential theft attempt detected on Azure AI model deployment

These alerts should appear under:

Defender for Cloud → Security Alerts

Reference: https://learn.microsoft.com/en-us/azure/defender-for-cloud/alerts-ai-workloads

Actual Behavior

Prompt Shields detects and blocks malicious prompts.

However, no AI workload-related security alerts are generated in Defender for Cloud.

Questions

Are there additional configuration steps required beyond enabling the Defender plan and onboarding AI services?

Is there a propagation delay after enabling Defender for AI workloads before alerts begin appearing?

Are AI threat protection alerts limited to specific regions, models, or deployment configurations?

Is there a recommended way to validate that the Defender for AI alert pipeline is working end-to-end?
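To make that last question concrete, the kind of end-to-end check I have in mind is: send a known jailbreak prompt, then poll the security alerts list for an `AI.Azure_*` alert type. The sketch below assumes the Microsoft.Security alerts REST list operation; the fetch function is stubbed, and a real run would GET the URL with a bearer token and pass in the parsed `value` array.

```python
# Sketch of an end-to-end smoke test: poll for an alert whose alertType
# carries the AI.Azure_ prefix. fetch is stubbed here; a real run would
# GET the URL below with a bearer token.
import time

def alerts_list_url(subscription_id: str) -> str:
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        "/providers/Microsoft.Security/alerts?api-version=2022-01-01"
    )

def poll_for_ai_alert(fetch, attempts: int = 5, delay_s: float = 60.0):
    """Return the first alert whose alertType starts with 'AI.Azure_', else None."""
    for i in range(attempts):
        for alert in fetch():
            alert_type = alert.get("properties", {}).get("alertType", "")
            if alert_type.startswith("AI.Azure_"):
                return alert
        if i < attempts - 1:
            time.sleep(delay_s)
    return None

# Demo with a stubbed fetch that returns one blocked-jailbreak alert.
demo = poll_for_ai_alert(
    lambda: [{"properties": {"alertType": "AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt"}}],
    attempts=1,
    delay_s=0,
)
print(demo is not None)  # True
```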

Any insights or troubleshooting guidance would be appreciated.

Azure OpenAI Service

An Azure service that provides access to OpenAI's models with enterprise capabilities.


2 answers

  1. Anshika Varshney 9,735 Reputation points Microsoft External Staff Moderator
    2026-03-17T16:13:44.99+00:00

    Hi Hardik Bansal,

    Thank you for reaching out on the Microsoft Q&A.

    This usually happens because Defender for Cloud only creates alerts when the right protection plans are enabled and when the activity matches a supported detection. Below are a few things you can check, based on how Defender for Cloud works.

    Please make sure the correct Defender plan is enabled on the subscription. Defender for Cloud does not generate alerts by default for all resources. Alerts come from specific Defender plans such as Defender for Servers, Defender for Storage, Defender for SQL, Defender for App Service, or Defender for AI workloads. You can confirm this by going to Microsoft Defender for Cloud in the Azure portal and checking Environment settings for your subscription. If a plan is not turned on, related alerts will not appear. https://learn.microsoft.com/azure/defender-for-cloud/alerts-overview

    Next, check whether the resource type you are testing is supported for alerts. Defender for Cloud only raises alerts for services and scenarios that have built‑in detections. If the activity is blocked or handled by another service, Defender for Cloud may not always create a visible alert. https://learn.microsoft.com/azure/defender-for-cloud/alerts-reference

    Another important point is timing. Some alerts appear almost immediately, but others take longer because Defender for Cloud analyzes logs and signals before creating an alert. For certain detections, it can take minutes or longer before the alert shows up in the Security alerts blade. Microsoft mentions this delay behavior in the alert reference documentation.

    Also, please check that you are looking in the correct place. Alerts are shown under Microsoft Defender for Cloud, then Security alerts. If filters are applied, such as time range, severity, or resource type, the alert list may look empty even when alerts exist. Clearing filters often helps. You can see how alerts are listed and filtered in the alerts documentation.

    If you are testing by generating known malicious or suspicious activity, keep in mind that not every test action results in an alert. Defender for Cloud raises alerts only when its detection logic confirms a real or likely threat. Blocked or low‑risk activity may not always create an alert, especially if it does not meet the alert confidence threshold. Microsoft describes how alerts are classified and generated in the alerts documentation.

    In short, please verify the Defender plan is enabled for that resource type, confirm the service is supported for alerts, allow some time for alert generation, and double‑check filters in the Security alerts view. These checks usually explain why alerts are not showing up even though Defender for Cloud is enabled.
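To make the first check scriptable, a sketch along these lines can read the plan state from the Microsoft.Security pricings GET operation. The `AI` plan name and api-version are assumptions, and a real call needs an Azure AD bearer token; the response handling below works on the parsed JSON.

```python
# Sketch only: read the Defender plan state via the Microsoft.Security
# pricings GET operation. The "AI" plan name and api-version are
# assumptions; a real call needs an Azure AD bearer token.
def pricing_get_url(subscription_id: str, plan: str = "AI") -> str:
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Security/pricings/{plan}"
        "?api-version=2024-01-01"
    )

def plan_enabled(pricing_response: dict) -> bool:
    """'Standard' means the paid Defender plan is on; 'Free' means it is off."""
    return pricing_response.get("properties", {}).get("pricingTier") == "Standard"

print(plan_enabled({"properties": {"pricingTier": "Standard"}}))  # True
```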

    I hope this helps. Do let me know if you have any further queries.

    Thank you!


  2. Q&A Assist
    2026-03-16T13:02:09.93+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    Threat protection for AI services in Microsoft Defender for Cloud is generally available and provides alerts for jailbreak attempts, credential theft, corrupted AI behavior, and related threats on supported Azure AI services.

    From the documented behavior:

    1. Supported services and signals
      • AI threat protection currently supports:
        • Azure OpenAI supported models.
        • Azure AI Model Inference service supported models.
      • At this phase, only text tokens are scanned; image and audio tokens are not.
      • Threat protection for AI services works together with Azure AI Content Safety Prompt Shields and Microsoft threat intelligence to generate alerts such as:
        • Detected credential theft attempts on an Azure AI model deployment (AI.Azure_CredentialTheftAttempt).
        • A Jailbreak attempt on an Azure AI model deployment was blocked by Azure AI Content Safety Prompt Shields (AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt).
        • A Jailbreak attempt on an Azure AI model deployment was detected by Azure AI Content Safety Prompt Shields (AI.Azure_Jailbreak.ContentFiltering.DetectedAttempt).
    2. Where alerts appear and how they are integrated
      • Defender for Cloud AI threat protection alerts are:
        • Surfaced in Microsoft Defender for Cloud under Security alerts.
        • Integrated into Defender XDR for correlation with other workloads and incidents. This allows security teams to centralize AI workload alerts and correlate them with other malicious activities.
      • Alerts from different sources can take different amounts of time to appear. Alerts that rely on deeper analysis (for example, network traffic) can be slower than those based on process or content analysis, but AI alerts are described as near real time.
    3. Alert types relevant to jailbreak and prompt injection
      • For Azure AI applications and model deployments, the documented AI alerts include:
        • Jailbreak attempts (blocked or detected) tied to Prompt Shields.
        • Credential theft attempts when credentials are detected in model responses.
        • Corrupted AI application/model/data directing phishing attempts via malicious URLs.
      • For AI agents (for example, Microsoft Foundry Agent service), additional alerts exist, such as:
        • Jailbreak attempts on AI agents (blocked or detected) by Prompt Shields.
        • ASCII smuggling attempts.
        • User phishing attempts on AI agents.
    4. Region and availability considerations
      • AI threat protection is listed as:
        • Release state: General availability (GA).
        • Clouds supported: Commercial clouds only (not Azure Government, 21Vianet, or connected AWS accounts).
      • The documentation does not specify per-region limitations for AI alerts beyond the cloud-type scope. Any region-specific behavior (for example, a region where AI threat protection is not yet fully rolled out) is not documented in the provided material.
    5. Configuration and validation
      • Required roles: To enable threat detection at subscription level, Owner or equivalent roles with the necessary data actions are required.
      • Once Defender for AI services is enabled and AI services are onboarded, no additional configuration steps are documented for AI alerts beyond:
        • Having supported Azure OpenAI / Azure AI Model Inference deployments.
        • Using text-based interactions that Prompt Shields can inspect.
      • The documentation for alert validation provides explicit simulation steps only for Windows VMs (using Defender for Servers and Defender for Endpoint test commands or EICAR). There is no documented, dedicated “test harness” for simulating AI threat protection alerts end-to-end in the same way.
    6. Practical implications for the described scenario
      • If Prompt Shields are blocking jailbreak attempts but no corresponding Defender for Cloud alerts appear:
        • The AI threat protection pipeline should, in principle, generate alerts like AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt for supported Azure OpenAI deployments.
        • However, the documentation does not provide:
          • Any additional hidden configuration steps beyond enabling the plan and onboarding AI services.
          • Any explicit propagation delay window specific to AI threat protection beyond the general note that some alerts can take longer to appear.
          • Any documented region-specific limitations or exclusions for AI threat protection within commercial Azure regions.
        • There is no officially documented method in the provided material to force or simulate AI-specific alerts (unlike the VM alert validation script), so end-to-end validation for AI alerts is limited to exercising real scenarios (for example, jailbreak attempts, credential leakage) against supported models and then monitoring Defender for Cloud / Defender XDR.
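    For triage scripting, the alert type IDs quoted in section 1 can be matched directly. The mapping below simply restates those documented names as a lookup; it is a convenience sketch, not an SDK feature.

```python
# The alert type IDs below are the ones quoted from the documentation in
# section 1; the lookup itself is just a convenience, not an SDK feature.
AI_ALERT_TYPES = {
    "AI.Azure_CredentialTheftAttempt": "credential theft attempt",
    "AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt": "jailbreak blocked by Prompt Shields",
    "AI.Azure_Jailbreak.ContentFiltering.DetectedAttempt": "jailbreak detected by Prompt Shields",
}

def describe(alert_type: str) -> str:
    """Map a Defender alertType to a short description for triage output."""
    return AI_ALERT_TYPES.get(alert_type, "not an AI threat protection alert")

print(describe("AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt"))
```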

    Given the above documentation, the behavior described (Prompt Shields blocking but no alerts visible) is not explained by any additional configuration, delay, or region limitation in the provided content. For this specific case, the next step would be to use the general guidance to review security alerts and incidents in Defender for Cloud and Defender XDR, and, if the issue persists, engage Azure support so they can verify AI threat protection telemetry and region-specific behavior for the subscription.

