
Missing telemetry for HTTP external vendor dependency calls in Central US App-Insights instance

Arundhati Kumbhar 0 Reputation points
2026-03-17T17:16:00.2033333+00:00

I observed missing telemetry for HTTP external vendor dependency calls in our Central US App-Insights instance, while the same application reports fine to the East US instances. The Central US resources were configured recently, and the application was migrated from on-prem (with Azure App-Insights in East US) to Azure (with resources and Azure App-Insights in Central US). Did anyone face a similar issue?

Azure Monitor

An Azure service that is used to collect, analyze, and act on telemetry data from Azure and on-premises environments.


2 answers

  1. Suchitra Suregaunkar 12,015 Reputation points Microsoft External Staff Moderator
    2026-03-17T19:46:02.6333333+00:00

    Hello **Arundhati Kumbhar**, it sounds like your app in Central US is making HTTP dependency calls, but they never show up in your Central US Application Insights resource, while the same code in East US works fine. Here are some things to check:

    1. Run the built-in diagnostics:
      • In the Azure portal, open your Central US AI resource, go to “Diagnose and solve problems” → “Issue tracking additional telemetry”, and make sure neither the ApplicationInsightsUnsupportedSDKDiagnostic nor the ApplicationInsightsMissingDataDiagnostic flags any errors.
    2. Verify your instrumentation key or connection string:
      • Make sure your app is actually pointing at the Central US AI key/connection string and not still reporting to the old East US resource.
      • If you’ve migrated to connection strings, confirm you’re not mixing instrumentation keys and connection strings in your config.
    3. Check your SDK version and dependency-tracking support:
      • Ensure you’re on a supported AI SDK version for your platform. Older versions sometimes drop HTTP calls.
      • If you’re on Node.js, verify your HTTP client library is one of the “automatically tracked modules” (see docs). If not, you may need to call client.trackDependency() manually; see the first sketch after this list.
      • On .NET/.NET Core, confirm auto-instrumentation hasn’t been disabled by an XDT setting or by interop.
    4. Look at sampling and filtering:
      • If you’ve added any custom TelemetryProcessor/TelemetryInitializer, or you have sampling overrides, make sure you’re not inadvertently dropping dependency telemetry; the processor sketch after this list shows one way to check.
    5. Network and private-link considerations:
      • If your Central US AI uses a private endpoint (Azure Monitor Private Link), check that your app can actually reach that endpoint (or that public ingestion is still enabled).
    6. Auto-instrumentation in App Service (if applicable):
      • Go to https://<yoursitename>.scm.azurewebsites.net/ApplicationInsights and confirm the extension status (Windows: agent version ~2; Linux: ~3) and that SDKPresent=false and AgentInitializedSuccessfully=true.
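
    To rule out points 2 and 3 together, you can wire up the SDK explicitly and emit one hand-tracked dependency. This is a minimal sketch against the classic `applicationinsights` Node.js SDK (v2); the vendor URL and the connection-string environment variable are placeholders you’d swap for your Central US values.

    ```typescript
    import * as appInsights from "applicationinsights";

    // Point explicitly at the Central US resource (placeholder env var).
    appInsights
      .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING)
      .setAutoCollectDependencies(true) // HTTP dependency auto-collection on
      .start();

    const client = appInsights.defaultClient;

    // Hand-tracked dependency: if this shows up in Central US but the
    // auto-collected calls don't, the problem is auto-collection, not ingestion.
    client.trackDependency({
      target: "https://vendor.example.com",          // hypothetical vendor host
      name: "GET /api/orders",
      data: "https://vendor.example.com/api/orders", // hypothetical call
      duration: 231,                                 // ms
      resultCode: 200,
      success: true,
      dependencyTypeName: "HTTP",
    });

    client.flush();
    ```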
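
    For point 4, the quickest way to see whether a processor or sampling rule is eating dependency items is to log each dependency envelope just before it leaves the SDK. Again a hedged sketch against the classic Node.js SDK; a processor that returns false is what silently drops telemetry.

    ```typescript
    import * as appInsights from "applicationinsights";

    appInsights.setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING).start();

    // Runs for every telemetry item before transmission.
    appInsights.defaultClient.addTelemetryProcessor((envelope) => {
      if (envelope.data.baseType === "RemoteDependencyData") {
        // If these log lines appear but nothing reaches the portal, suspect
        // ingestion/connectivity; if they never appear, auto-collection is off
        // or an earlier processor returned false.
        console.log("Dependency envelope:", JSON.stringify(envelope.data.baseData));
      }
      return true; // returning false here would discard the item
    });
    ```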

    If none of those steps surface the issue, we’ll need a bit more info to dig deeper:

    • What language/SDK and version are you using?

    • Are you using manual code instrumentation, the Azure App Service site extension, or the AI Agent/auto-instrumentation?

    • Do you see any error logs or dropped-item counters in your app’s console or log files?

    • Are you using sampling or any custom telemetry processors?

    References:

    • Issue tracking additional telemetry (dependencies, exceptions, etc.): https://docs.microsoft.com/azure/azure-monitor/app/missing-telemetry
    • Dependency tracking in Application Insights: https://docs.microsoft.com/azure/azure-monitor/app/dependencies
    • Troubleshoot missing specific telemetry types: https://docs.microsoft.com/azure/azure-monitor/app/troubleshoot-missing-telemetry
    • Troubleshoot Application Insights integration with Azure App Service: https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/telemetry/troubleshoot-app-service-issues

    Hope this helps—let me know the answers to the questions above if you’re still stuck!

    Thanks,
    Suchitra.


  2. Q&A Assist
    2026-03-17T17:16:17.9+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    Investigate the Central US environment end to end, focusing on configuration differences and connectivity to the ingestion endpoint.

    1. Verify Application Insights configuration in Central US
      • Confirm the app is actually sending telemetry to the Central US resource (correct APPLICATIONINSIGHTS_CONNECTION_STRING / instrumentation key for the Central US instance).
      • If using codeless/App Service autoinstrumentation, ensure ApplicationInsightsAgent_EXTENSION_VERSION is set correctly for the stack and OS (for example, ~2 on Windows, ~3 on Linux for supported languages) and that the extension is running. See the language-specific checks under “Missing telemetry” for ASP.NET, Java, Node.js, and Python.
      • For ASP.NET and Windows App Service, browse to https://<yoursitename>.scm.azurewebsites.net/ApplicationInsights and confirm:
        • “Application Insights Extension Status” shows the extension as running.
        • A status source JSON file exists under D:\home\LogFiles\ApplicationInsights\status\....
        • IKeyExists is true.
    2. Check for SDK/agent conflicts that can suppress autocollected dependencies
      • For ASP.NET: ensure there are no AppAlreadyInstrumented, AppContainsDiagnosticSourceAssembly, or AppContainsAspNetTelemetryCorrelationAssembly entries in the status JSON. If present, remove Microsoft.ApplicationInsights, System.Diagnostics.DiagnosticSource, and Microsoft.AspNet.TelemetryCorrelation packages so the site extension can instrument correctly.
      • For Node.js and Python on Linux, inspect /var/log/applicationinsights/status*.json and confirm:
        • AgentInitializedSuccessfully is true.
        • SDKPresent is false (for Node.js). If SDKPresent is true, the agent backs off and may not collect dependency telemetry.
    3. Test connectivity from Central US resources to the ingestion service
      • Application Insights SDKs and agents send telemetry as REST calls to ingestion endpoints. From the Central US app host, use curl or PowerShell to call the ingestion endpoint for the Central US resource (taken from the connection string) and confirm that no firewall, proxy, or NSG is blocking outbound traffic; a minimal probe sketch follows this list.
      • If a firewall is in place, ensure outbound access is allowed to the appropriate Application Insights ingestion URLs for the region.
    4. Check for OpenTelemetry or exporter issues (if using OpenTelemetry)
      • For .NET or Java with OpenTelemetry exporters, enable diagnostic logging (OpenTelemetry-AzureMonitor-Exporter for .NET, built-in diagnostics for Java) and look for export or connectivity errors; a Node.js equivalent is sketched after this list.
      • Then test connectivity as above using cURL/PowerShell to the ingestion endpoint.
    5. Validate that only specific telemetry is missing
      • Since only HTTP external vendor dependencies are missing, confirm dependency collection is enabled in the SDK/agent configuration in Central US and that no suppression rules (for example, OpenTelemetry suppression or language-specific filters) are configured differently from East US.
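
    As a companion to step 3, here is a minimal reachability probe using the built-in fetch in Node.js 18+, equivalent to the curl/PowerShell test. The endpoint below is a placeholder; use the IngestionEndpoint value from your Central US connection string.

    ```typescript
    // Placeholder: copy the real IngestionEndpoint from the connection string.
    const endpoint = "https://centralus-2.in.applicationinsights.azure.com";

    async function probe(): Promise<void> {
      try {
        // An empty POST is expected to be rejected (for example HTTP 400),
        // which still proves the endpoint is reachable. A DNS, connect, or
        // timeout error instead points at a firewall, proxy, NSG, or
        // private-link misconfiguration.
        const res = await fetch(`${endpoint}/v2/track`, { method: "POST", body: "" });
        console.log(`Reachable: HTTP ${res.status} ${res.statusText}`);
      } catch (err) {
        console.error("Not reachable:", err);
      }
    }

    probe();
    ```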
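    For step 4, the docs cover .NET and Java; if the app runs on Node.js with the Azure Monitor OpenTelemetry distro, a comparable hedged sketch (assuming the @azure/monitor-opentelemetry package) is to turn on OpenTelemetry's own diagnostic logger before initializing:

    ```typescript
    import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
    import { useAzureMonitor } from "@azure/monitor-opentelemetry";

    // Surface the SDK's internal diagnostics (export failures, retries,
    // connectivity errors) in the app's console output.
    diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

    useAzureMonitor({
      azureMonitorExporterOptions: {
        connectionString: process.env.APPLICATIONINSIGHTS_CONNECTION_STRING,
      },
    });
    ```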

    If East US and Central US instances are configured identically and connectivity is confirmed, compare the status logs (status.json / applicationinsights-extension.log) between regions to identify any initialization or key/connection-string issues specific to Central US.


