WebSocket upgrade failed: Authentication error (401). Please check subscription information and region name

Sanjay Santra 0 Reputation points
2025-03-29T11:15:24.3766667+00:00

I am continuously getting the following error when trying to access Azure Speech services from a C# Windows Forms app:
WebSocket upgrade failed: Authentication error (401). Please check subscription information and region name

  1. The Speech resource is on the S0 tier
  2. Registered a Microsoft Entra ID app and granted it permission to Microsoft Cognitive Services
  3. Under the Speech resource's role assignments, added the Entra ID app (service principal) to the Cognitive Services Speech User and Cognitive Services Speech Contributor roles
  4. Created a custom domain name and a private endpoint for the Speech resource

What am I missing? Here's the code:

    using System;
    using Azure.Core;
    using Azure.Identity;
    using Microsoft.CognitiveServices.Speech;
    using Microsoft.CognitiveServices.Speech.Translation;

    TokenRequestContext context = new Azure.Core.TokenRequestContext(new string[] { "https://cognitiveservices.azure.com/.default" });
    //InteractiveBrowserCredential browserCredential = new InteractiveBrowserCredential();
    var credential = new InteractiveBrowserCredential(new InteractiveBrowserCredentialOptions
    {
        TenantId = "",
        ClientId = ""
    });
    var browserToken = credential.GetToken(context);
    string aadToken = browserToken.Token;

    // Define the custom domain endpoint for your Speech resource.
    var endpoint = "wss://customdomain.cognitiveservices.azure.com/stt/speech/universal/v2";
    string resourceId = "/subscriptions/....";
    string region = "eastus";

    // You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
    var authorizationToken = $"aad#{resourceId}#{aadToken}";

    SpeechTranslationConfig translationConfig = SpeechTranslationConfig.FromAuthorizationToken(authorizationToken, region);
    translationConfig.SpeechRecognitionLanguage = "en-US";
    translationConfig.AddTargetLanguage("fr"); // English to French

    // Enable Cognitive Services Logging
    string logFilePath = "speech-sdk-log.txt";
    translationConfig.SetProperty(PropertyId.Speech_LogFilename, logFilePath);
    Console.WriteLine($"📄 Logging enabled. Logs will be saved to: {logFilePath}");

    using var recognizer = new TranslationRecognizer(translationConfig);
    Console.WriteLine("Speak something...");
    var result = await recognizer.RecognizeOnceAsync();

    if (result.Reason == ResultReason.TranslatedSpeech)
    {
        Console.WriteLine($"Original: {result.Text}");
        foreach (var (lang, translation) in result.Translations)
        {
            Console.WriteLine($"Translated [{lang}]: {translation}");
        }
    }
    else if (result.Reason == ResultReason.Canceled)
    {
        var cancellation = CancellationDetails.FromResult(result);
        Console.WriteLine($"Speech translation canceled: {cancellation.Reason}");
        if (cancellation.Reason == CancellationReason.Error)
        {
            Console.WriteLine($"Error Code: {cancellation.ErrorCode}");
            Console.WriteLine($"Error Details: {cancellation.ErrorDetails}");
        }
    }
    else
    {
        Console.WriteLine($"Speech Recognition Failed: {result.Reason}");
    }
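
One thing I noticed while putting this together: the endpoint variable above is never actually used, because the config is created with FromAuthorizationToken plus a region name. Should I instead be building the config directly from the custom domain endpoint, something like the sketch below? This is only a guess on my part: it assumes the SpeechTranslationConfig.FromEndpoint(Uri) overload and the AuthorizationToken property, reuses the same placeholder endpoint URL and resource ID as above, and the exact WebSocket path for translation over a private endpoint may well be different.

    using System;
    using Microsoft.CognitiveServices.Speech.Translation;

    // Sketch only: create the translation config from the custom domain endpoint
    // instead of FromAuthorizationToken(token, region). The URL, resource ID, and
    // token below are the same placeholders used in the snippet above.
    var endpoint = new Uri("wss://customdomain.cognitiveservices.azure.com/stt/speech/universal/v2");
    string resourceId = "/subscriptions/....";
    string aadToken = "<Microsoft Entra access token from InteractiveBrowserCredential>";

    // Same "aad#{resourceId}#{token}" format as above, but applied via the
    // AuthorizationToken property after creating the config from the endpoint.
    var config = SpeechTranslationConfig.FromEndpoint(endpoint);
    config.AuthorizationToken = $"aad#{resourceId}#{aadToken}";
    config.SpeechRecognitionLanguage = "en-US";
    config.AddTargetLanguage("fr");

    using var recognizer = new TranslationRecognizer(config);
    var result = await recognizer.RecognizeOnceAsync();
    Console.WriteLine($"{result.Reason}: {result.Text}");

The idea would be that with a private endpoint the SDK should talk to the custom domain host rather than the default regional endpoint, but I may be wrong about which factory method is expected here.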
Azure AI Speech