Azure Speech Service with MAS AEC: aec_v1.fpie not found error

Ken Chu 0 Reputation points
2025-04-12T11:09:33.7133333+00:00

Hi,
I am developing a program that uses Azure speech to text together with the Microsoft Audio Stack's AEC and a microphone geometry setting. However, I get an error from the AEC saying that the file aec_v1.fpie cannot be found. I have installed both the Speech and MAS packages and tried versions 1.43.0, 1.42.0, and 1.41.1. I have also installed/reinstalled the packages and cleared the NuGet cache, but still no luck. I have spent a few weeks trying different ways to solve this. Hope you can help me with this.

dotnet add package Microsoft.CognitiveServices.Speech --version 1.43.0

dotnet add package Microsoft.CognitiveServices.Speech.Extension.MAS --version 1.43.0

The following is the full error output and source code:

Initializing program...

Creating speech config...

Creating audio processing options...

Configuring audio input...

Creating speech recognizer...

Starting recognition... Press Enter to stop.

Continuous recognition started successfully

Press Enter to stop recognition...

rfail (line 42 of C:\__w\1\s\src\unimic_runtime\apps\CAECV0FPIEFilter.h): Model file (C:\Users\User\Desktop\Development\microsoft-audio-stack-test\AudioStackTest\bin\Debug\net9.0\runtimes\win-x64\native\MASmodels\aec_v1.fpie) not found

Session started

CANCELED: Reason=Error

CANCELED: ErrorCode=RuntimeError

CANCELED: ErrorDetails=Exception with an error code: 0x1b (SPXERR_RUNTIME_ERROR) SessionId: f17b5d1b092441bd9517688951aaf812

Session stopped

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Program
{
    static async Task Main(string[] args)
    {
        try
        {
            Console.WriteLine("Initializing program...");
            
            // Replace with your own subscription key and region
            var subscriptionKey = "";
            var serviceRegion = "";
            
            Console.WriteLine("Creating speech config...");
            var speechConfig = SpeechConfig.FromSubscription(subscriptionKey, serviceRegion);
            
            Console.WriteLine("Creating audio processing options...");
            var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_V2);
            
            Console.WriteLine("Configuring audio input...");
            var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);

            Console.WriteLine("Creating speech recognizer...");
            using var recognizer = new SpeechRecognizer(speechConfig, audioInput);

            // Set up event handlers
            recognizer.Recognized += (s, e) =>
            {
                if (e.Result.Reason == ResultReason.RecognizedSpeech)
                {
                    Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
                }
            };

            recognizer.Canceled += (s, e) =>
            {
                Console.WriteLine($"CANCELED: Reason={e.Reason}");

                if (e.Reason == CancellationReason.Error)
                {
                    Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
                    Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
                }
            };

            recognizer.SessionStarted += (s, e) =>
            {
                Console.WriteLine("Session started");
            };

            recognizer.SessionStopped += (s, e) =>
            {
                Console.WriteLine("Session stopped");
            };

            Console.WriteLine("Starting recognition... Press Enter to stop.");
            
            // Start continuous recognition
            await recognizer.StartContinuousRecognitionAsync();
            Console.WriteLine("Continuous recognition started successfully");

            // Wait for Enter key press
            Console.WriteLine("Press Enter to stop recognition...");
            while (true)
            {
                if (Console.KeyAvailable)
                {
                    var key = Console.ReadKey(true);
                    if (key.Key == ConsoleKey.Enter)
                    {
                        break;
                    }
                }
                await Task.Delay(1000); // Small delay to prevent high CPU usage
            }

            // Stop recognition
            await recognizer.StopContinuousRecognitionAsync();
            Console.WriteLine("Recognition stopped");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error occurred: {ex.Message}");
            Console.WriteLine($"Error type: {ex.GetType().FullName}");
            Console.WriteLine($"Stack trace: {ex.StackTrace}");
            
            if (ex.InnerException != null)
            {
                Console.WriteLine($"Inner exception: {ex.InnerException.Message}");
                Console.WriteLine($"Inner exception type: {ex.InnerException.GetType().FullName}");
            }
        }
    }
}
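
To rule out the model file simply not being copied to the build output, the path from the error message can also be checked at startup. The following is only a diagnostic sketch; the runtimes\win-x64\native\MASmodels layout is taken from the error output above, not from documentation:

// Diagnostic only: check whether the MAS AEC model file exists where
// the error message says the SDK looks for it (layout assumed from
// the error output above).
var modelPath = Path.Combine(
    AppContext.BaseDirectory,
    "runtimes", "win-x64", "native", "MASmodels", "aec_v1.fpie");
Console.WriteLine(File.Exists(modelPath)
    ? $"MAS model found at: {modelPath}"
    : $"MAS model missing at: {modelPath}");

If this reports the file as missing, the MAS package's native assets are not reaching the build output, which would point at a restore/packaging issue rather than at the recognizer options.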

1 answer

Manas Mohanty 3,125 Reputation points Microsoft External Staff
    2025-04-16T15:59:49.6066667+00:00

    Hi Ken Chu

    As per this section of the documentation:

    Microsoft Audio Stack requires the reference channel (also known as loopback channel) to perform echo cancellation. The source of the reference channel varies by platform:

    Windows - The reference channel is automatically gathered by the Speech SDK if the SpeakerReferenceChannel::LastChannel option is provided when creating AudioProcessingOptions.

    Linux - ALSA (Advanced Linux Sound Architecture) must be configured to provide the reference audio stream as the last channel for the audio input device used. ALSA is configured in addition to providing the SpeakerReferenceChannel::LastChannel option when creating AudioProcessingOptions.

    I was able to use the audio processing options below, which are mentioned here:

    var audioProcessingOptions = AudioProcessingOptions.Create(
        AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT,
        PresetMicrophoneArrayGeometry.Linear2,
        SpeakerReferenceChannel.LastChannel);

    Console.WriteLine("Configuring audio input...");
    var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
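
    Note that PresetMicrophoneArrayGeometry.Linear2 assumes a two-microphone linear array. If your device's geometry differs, a custom MicrophoneArrayGeometry can be passed instead. The snippet below is only a minimal sketch, assuming a two-element linear array with coordinates in millimeters; the values are made up for illustration, so please verify the constructor parameters against the MAS documentation for your SDK version:

    // Sketch only: custom two-microphone linear array; coordinates are in
    // millimeters relative to the array center (illustrative values).
    var geometry = new MicrophoneArrayGeometry(
        MicrophoneArrayType.Linear,
        new[]
        {
            new MicrophoneCoordinates(-40, 0, 0),
            new MicrophoneCoordinates(40, 0, 0)
        });

    var audioProcessingOptions = AudioProcessingOptions.Create(
        AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT,
        geometry,
        SpeakerReferenceChannel.LastChannel);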
    
    

    Would this help address your requirement?

    Thank you.

