Hi,
I am developing a program that uses Azure speech to text together with the Microsoft Audio Stack (MAS), specifically its acoustic echo cancellation (AEC) and microphone geometry settings. However, I get an error from the AEC saying the file aec_v1.fpie cannot be found. I have installed both the Speech and MAS packages and tried versions 1.43.0, 1.42.0, and 1.41.1. I have also installed/reinstalled the packages and cleared the NuGet cache, but still no luck. I have spent a few weeks trying different ways to solve this. Hope you can help me with this.
dotnet add package Microsoft.CognitiveServices.Speech --version 1.43.0
dotnet add package Microsoft.CognitiveServices.Speech.Extension.MAS --version 1.43.0
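For context on the microphone geometry part: as far as I understand, the SDK also has overloads of AudioProcessingOptions.Create that take a preset microphone array geometry and a speaker reference channel (which AEC uses as its loopback reference). A sketch of what I eventually want to configure, assuming those overloads work as documented (Linear2 and LastChannel are just example values, not my actual device geometry):

```csharp
// Sketch: enable MAS with a preset microphone array geometry and an AEC
// loopback reference channel. Linear2 / LastChannel are illustrative
// values; they would need to match the actual capture device.
var options = AudioProcessingOptions.Create(
    AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_V2,
    PresetMicrophoneArrayGeometry.Linear2,
    SpeakerReferenceChannel.LastChannel);
var audioInput = AudioConfig.FromDefaultMicrophoneInput(options);
```

For now, though, the repro below fails with the plain Create(AUDIO_INPUT_PROCESSING_ENABLE_V2) overload before I even get to geometry.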
The full error output and source code follow:
Initializing program...
Creating speech config...
Creating audio processing options...
Configuring audio input...
Creating speech recognizer...
Starting recognition... Press Enter to stop.
Continuous recognition started successfully
Press Enter to stop recognition...
rfail (line 42 of C:\__w\1\s\src\unimic_runtime\apps\CAECV0FPIEFilter.h): Model file (C:\\Users\\User\\Desktop\\Development\\microsoft-audio-stack-test\\AudioStackTest\\bin\\Debug\\net9.0\\runtimes\\win-x64\\native\\MASmodels\\aec_v1.fpie) not foundSession started
CANCELED: Reason=Error
CANCELED: ErrorCode=RuntimeError
CANCELED: ErrorDetails=Exception with an error code: 0x1b (SPXERR_RUNTIME_ERROR) SessionId: f17b5d1b092441bd9517688951aaf812
Session stopped
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using NAudio.Wave;

class Program
{
    static async Task Main(string[] args)
    {
        try
        {
            Console.WriteLine("Initializing program...");

            // Replace with your own subscription key and region
            var subscriptionKey = "";
            var serviceRegion = "";

            Console.WriteLine("Creating speech config...");
            var speechConfig = SpeechConfig.FromSubscription(subscriptionKey, serviceRegion);

            Console.WriteLine("Creating audio processing options...");
            var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_V2);

            Console.WriteLine("Configuring audio input...");
            var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);

            Console.WriteLine("Creating speech recognizer...");
            using var recognizer = new SpeechRecognizer(speechConfig, audioInput);

            // Set up event handlers
            recognizer.Recognized += (s, e) =>
            {
                if (e.Result.Reason == ResultReason.RecognizedSpeech)
                {
                    Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
                }
            };
            recognizer.Canceled += (s, e) =>
            {
                Console.WriteLine($"CANCELED: Reason={e.Reason}");
                if (e.Reason == CancellationReason.Error)
                {
                    Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
                    Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
                }
            };
            recognizer.SessionStarted += (s, e) =>
            {
                Console.WriteLine("Session started");
            };
            recognizer.SessionStopped += (s, e) =>
            {
                Console.WriteLine("Session stopped");
            };

            Console.WriteLine("Starting recognition... Press Enter to stop.");

            // Start continuous recognition
            await recognizer.StartContinuousRecognitionAsync();
            Console.WriteLine("Continuous recognition started successfully");

            // Wait for Enter key press
            Console.WriteLine("Press Enter to stop recognition...");
            while (true)
            {
                if (Console.KeyAvailable)
                {
                    var key = Console.ReadKey(true);
                    if (key.Key == ConsoleKey.Enter)
                    {
                        break;
                    }
                }
                await Task.Delay(1000); // Small delay to prevent high CPU usage
            }

            // Stop recognition
            await recognizer.StopContinuousRecognitionAsync();
            Console.WriteLine("Recognition stopped");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error occurred: {ex.Message}");
            Console.WriteLine($"Error type: {ex.GetType().FullName}");
            Console.WriteLine($"Stack trace: {ex.StackTrace}");
            if (ex.InnerException != null)
            {
                Console.WriteLine($"Inner exception: {ex.InnerException.Message}");
                Console.WriteLine($"Inner exception type: {ex.InnerException.GetType().FullName}");
            }
        }
    }
}
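One extra diagnostic I can think of (a sketch; "modelPath" is just an illustrative name, and the path is reassembled from the error message above): checking before creating the recognizer whether the model file the runtime complains about is actually present in the build output. If it prints MISSING, the problem would be the MAS package's native assets not being copied to bin, rather than anything in the recognition code itself.

```csharp
// Sketch: verify the MAS AEC model file exists in the build output.
// Path segments are taken from the error message; adjust for other RIDs.
var modelPath = Path.Combine(AppContext.BaseDirectory,
    "runtimes", "win-x64", "native", "MASmodels", "aec_v1.fpie");
Console.WriteLine(File.Exists(modelPath)
    ? $"MAS model found: {modelPath}"
    : $"MAS model MISSING: {modelPath}");
```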