A quick run through the new Windows 7 multimedia audio SDK samples
As I mentioned yesterday, the Windows SDK is now live. It contains 9 new multimedia audio samples (and one changed sample).
Two of the SDK samples demonstrate the Windows 7 “Ducking” feature. They’re actually based on the code I wrote for my PDC talk last year, but tweaked to show some new scenarios and to clean up the code (the original PDC code was not ready for prime time; it was strictly demo-ware).
- The DuckingMediaPlayer application shows a really simple DirectShow based media player and also shows how to integrate the player’s volume control with the Windows per-application volume mixer. So if you’re wondering how to integrate your application’s volume control with the built-in per-application volume controls, this sample will show you how to do it (a minimal sketch of the session volume plumbing appears right after these two bullets).
- The DuckingCaptureSample application is a trivial application that uses the wave APIs to capture audio data (and throw it away). This shows how to trigger the “ducking” feature in Windows 7.
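If you just want to see the shape of the volume mixer integration before opening the sample, here is a minimal sketch (my illustration, not the sample’s code, with all error handling omitted): activate IAudioSessionManager on the default render endpoint and use ISimpleAudioVolume, which controls the same volume that the per-application mixer shows for your session. The sample itself does considerably more.

    //  Minimal sketch: read and set the per-application (session) volume for this process.
    //  All HRESULTs should be checked in real code.
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audiopolicy.h>

    int wmain()
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED);

        IMMDeviceEnumerator *deviceEnumerator = NULL;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                         IID_PPV_ARGS(&deviceEnumerator));

        //  The session manager is activated on the endpoint the application renders to.
        IMMDevice *endpoint = NULL;
        deviceEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &endpoint);

        IAudioSessionManager *sessionManager = NULL;
        endpoint->Activate(__uuidof(IAudioSessionManager), CLSCTX_ALL, NULL,
                           reinterpret_cast<void **>(&sessionManager));

        //  ISimpleAudioVolume is the volume the Volume Mixer shows for this app's session.
        ISimpleAudioVolume *sessionVolume = NULL;
        sessionManager->GetSimpleAudioVolume(NULL, FALSE, &sessionVolume);

        float currentLevel = 0.0f;
        sessionVolume->GetMasterVolume(&currentLevel);
        sessionVolume->SetMasterVolume(0.5f, NULL);   // 50%, visible immediately in the mixer

        sessionVolume->Release();
        sessionManager->Release();
        endpoint->Release();
        deviceEnumerator->Release();
        CoUninitialize();
        return 0;
    }

Calling SetMasterVolume here moves your application’s slider in the Volume Mixer, and dragging that slider changes what GetMasterVolume returns.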
The other 7 samples reproduce functionality from the old WinAudio SDK sample. Instead of providing a single monolithic audio sample, we crafted separate samples, each of which shows one aspect of audio rendering or capture. All of these samples are simple console applications which read their parameters from the command line; they’re intentionally extremely simple to reduce the potential for confusion.
- The EndpointVolume sample shows how to use the IAudioEndpointVolume APIs. It demonstrates using the VolumeStepUp, VolumeStepDown, SetMute and SetMasterVolumeLevelScalar APIs.
- The CaptureSharedEventDriven sample shows how to use WASAPI in “event driven” mode to capture data in shared mode from the microphone.
- The CaptureSharedTimerDriven sample shows how to use WASAPI in “timer driven” mode to capture data in shared mode from the microphone.
- The RenderSharedEventDriven sample shows how to use WASAPI in “event driven” mode to play a simple sine wave in shared mode through the speakers.
- The RenderSharedTimerDriven sample shows how to use WASAPI in “timer driven” mode to play a simple sine wave in shared mode through the speakers.
- The RenderExclusiveEventDriven sample shows how to use WASAPI in “event driven” mode to play a simple sine wave in exclusive mode through the speakers. It also shows the “exclusive mode swizzle” needed to align the buffer size with the hardware buffer size (which is required to work with many HDAudio solutions); a rough sketch of the swizzle follows this list.
- The RenderExclusiveTimerDriven sample shows how to use WASAPI in “timer driven” mode to play a simple sine wave in exclusive mode through the speakers. It shows the same “exclusive mode swizzle” to align the buffer size with the hardware buffer size.
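To make the “exclusive mode swizzle” concrete, here is a rough sketch (my illustration, not the sample’s code): if IAudioClient::Initialize fails with AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED, query the buffer size the engine actually wants, convert that back into a periodicity, discard the now-unusable audio client, and initialize a fresh one with the aligned values.

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    //  Sketch of the exclusive mode buffer alignment "swizzle" (not the sample's code).
    //  Returns an initialized exclusive mode, event driven IAudioClient, or NULL on failure.
    IAudioClient *InitializeExclusiveEventDriven(IMMDevice *endpoint, WAVEFORMATEX *format,
                                                 REFERENCE_TIME periodicity)
    {
        IAudioClient *audioClient = NULL;
        endpoint->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                           reinterpret_cast<void **>(&audioClient));

        HRESULT hr = audioClient->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE,
                                             AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                             periodicity, periodicity, format, NULL);
        if (hr == AUDCLNT_E_BUFFER_SIZE_NOT_ALIGNED)
        {
            //  The requested period doesn't line up with the hardware (HDAudio) buffer.
            //  Ask how many frames the engine actually wants, convert that back into a
            //  period, throw away the now-unusable client, and initialize a fresh one.
            UINT32 alignedFrames = 0;
            audioClient->GetBufferSize(&alignedFrames);
            periodicity = (REFERENCE_TIME)(10000.0 * 1000 / format->nSamplesPerSec *
                                           alignedFrames + 0.5);

            audioClient->Release();
            endpoint->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                               reinterpret_cast<void **>(&audioClient));
            hr = audioClient->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE,
                                         AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                         periodicity, periodicity, format, NULL);
        }
        if (FAILED(hr))
        {
            audioClient->Release();
            return NULL;
        }
        return audioClient;
    }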
We don’t have exclusive mode capture samples because we felt that SDK users could derive them from the render samples if it was important to them.
All the shared mode samples also show how to implement stream switching.
One of the things I’m quite proud of in these samples is their structure. Each sample has the following basic file layout:
Directory of C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\multimedia\audio\RenderExclusiveEventDriven

08/07/2009  10:08 AM    <DIR>          .
08/07/2009  10:08 AM    <DIR>          ..
07/14/2009  06:54 PM             7,079 CmdLine.cpp
07/14/2009  06:54 PM               760 CmdLine.h
07/14/2009  06:54 PM             2,084 ReadMe.txt
07/14/2009  06:54 PM               533 stdafx.cpp
07/14/2009  06:54 PM               935 stdafx.h
07/14/2009  06:54 PM             1,067 targetver.h
07/14/2009  06:54 PM             1,925 ToneGen.h
07/14/2009  06:54 PM            18,376 WASAPIRenderer.cpp
07/14/2009  06:54 PM             2,560 WASAPIRenderer.h
07/14/2009  06:54 PM            14,754 WASAPIRenderExclusiveEventDriven.cpp
07/14/2009  06:54 PM             1,283 WASAPIRenderExclusiveEventDriven.sln
07/14/2009  06:54 PM             8,403 WASAPIRenderExclusiveEventDriven.vcproj
              12 File(s)         59,759 bytes
               2 Dir(s)  62,822,105,088 bytes free
Each sample has the same set of common files:
- CmdLine.cpp/CmdLine.h: A very simple command line parsing function
- stdafx.cpp/stdafx.h: Common header definitions
- targetver.h: Defines the target platform for these samples (Windows Vista in this case).
- ToneGen.h: A very simple sine wave generating function (not present for the capture samples); a simplified sketch of the idea appears after this list.
- WASAPIRenderer.cpp/WASAPIRenderer.h (WASAPICapture for capture samples): The code which does all the WASAPI rendering and/or capturing
- <SampleName>: Scaffolding for the sample – this defines the command line parameters and instantiates the actual render/capture object and asks it to do the rendering/capturing.
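For a sense of how small ToneGen.h really is, a hypothetical simplification (not the shipping header) that fills an interleaved 16-bit PCM buffer with a sine tone needs nothing more than this:

    #include <math.h>
    #include <stddef.h>

    //  Hypothetical simplification of a ToneGen-style helper (not the shipping header):
    //  fill an interleaved 16-bit PCM buffer with a sine tone, carrying the phase in
    //  'theta' so consecutive buffers join up without a click.
    void GenerateSineSamples(short *buffer, size_t frameCount, unsigned int channelCount,
                             unsigned int samplesPerSecond, double frequency, double *theta)
    {
        const double pi = 3.14159265358979323846;
        const double increment = frequency * 2.0 * pi / samplesPerSecond;
        for (size_t frame = 0; frame < frameCount; frame += 1)
        {
            short sample = (short)(sin(*theta) * 32767.0);
            for (unsigned int channel = 0; channel < channelCount; channel += 1)
            {
                buffer[frame * channelCount + channel] = sample;   // same tone on every channel
            }
            *theta += increment;
        }
    }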
Each of these samples is structured essentially identically; in fact, they’re sufficiently similar that you can use your favorite file comparison tool to see what code has to change to go from one mode to another. So, to see what it takes to go from exclusive timer driven to exclusive event driven, you can windiff the RenderExclusiveEventDriven\WASAPIRenderer.cpp and RenderExclusiveTimerDriven\WASAPIRenderer.cpp files and see exactly what changes are required to implement the other model.
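To give a flavor of what that diff boils down to, here is a heavily simplified, hypothetical sketch of the heart of the two shared mode render loops; the parameters stand in for state the real samples keep on the renderer object, and error handling is omitted.

    #include <windows.h>
    #include <audioclient.h>

    //  Event driven render loop (shared mode): Initialize was called with
    //  AUDCLNT_STREAMFLAGS_EVENTCALLBACK; the engine signals the event when it wants data.
    void EventDrivenLoop(IAudioClient *audioClient, IAudioRenderClient *renderClient,
                         UINT32 bufferFrames, volatile bool &stillPlaying)
    {
        HANDLE samplesReady = CreateEventEx(NULL, NULL, 0, EVENT_MODIFY_STATE | SYNCHRONIZE);
        audioClient->SetEventHandle(samplesReady);   // must happen before Start()
        audioClient->Start();
        while (stillPlaying)
        {
            WaitForSingleObject(samplesReady, INFINITE);
            UINT32 padding = 0;
            audioClient->GetCurrentPadding(&padding);
            UINT32 framesAvailable = bufferFrames - padding;
            BYTE *data = NULL;
            renderClient->GetBuffer(framesAvailable, &data);
            //  ... generate framesAvailable frames of audio into 'data' ...
            renderClient->ReleaseBuffer(framesAvailable, 0);
        }
        audioClient->Stop();
        CloseHandle(samplesReady);
    }

    //  Timer driven render loop (shared mode): no event, just wake up every half
    //  engine latency and top the buffer up with however much room has opened.
    void TimerDrivenLoop(IAudioClient *audioClient, IAudioRenderClient *renderClient,
                         UINT32 bufferFrames, UINT32 engineLatencyInMs, volatile bool &stillPlaying)
    {
        audioClient->Start();
        while (stillPlaying)
        {
            Sleep(engineLatencyInMs / 2);
            UINT32 padding = 0;
            audioClient->GetCurrentPadding(&padding);
            UINT32 framesAvailable = bufferFrames - padding;
            BYTE *data = NULL;
            renderClient->GetBuffer(framesAvailable, &data);
            //  ... generate framesAvailable frames of audio into 'data' ...
            renderClient->ReleaseBuffer(framesAvailable, 0);
        }
        audioClient->Stop();
    }

The event driven loop blocks until the engine asks for data; the timer driven loop wakes itself up and asks the engine how much room has opened. Everything else in the two files is essentially the same.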
Comments
Anonymous
August 10, 2009
"exclusive mode swizzle", eh? I like that even better than "alignment dance." :-)Anonymous
August 29, 2009
Why is there no "CaptureExclusiveEventDriven" sample?
Anonymous
August 29, 2009
Preben: That's because you can derive CaptureExclusiveEventDriven from CaptureSharedEventDriven and RenderExclusiveEventDriven - there's nothing special about capturing that needs to be shown (and frankly I had written a LOT of samples).
Anonymous
August 30, 2009
Larry, I was asking because I am debugging a Vista machine that consistently fails only with the event driven capture. It's reproducible with the (now obsoleted?) WinAudio sample. IAudioCaptureClient::LockBuffer returns S_OK and a NULL pointer after a short while, and after that there is no way to recover :(
Anonymous
August 30, 2009
Preben: Event driven capture doesn't work on Win7 before SP1 - in all honesty event driven wasn't as well tested in Vista as it should have been. With SP1 capture should work, I think. The other problem is that we discovered in Win7 that many 3rd party audio drivers don't correctly support event driven rendering or capturing. This is hideously unfortunate.
Anonymous
August 30, 2009
"....doesn't work on Win7 before SP1". You mean Vista SP1, right? This issue I am seeing is on Vista SP2 with both 3rd party and stock HD Audio drivers. Windows 7 with stock drivers works perfect on the same machine. It's bad to hear that event driven capture does not work always in Windows 7 either. I thought that the WHQL checed that the 3rd party behaved well. Okay, I have some stuff to rewrite then... :) Since the tick based timers (Sleep and WaitForSingleObject) are rather sloppy (+/- 15 ms.), I do not see how it is possible to implement low latency audio without resorting to using the rather heavy MM timers. Can MMCSS control the thread that MM timers are fired from?Anonymous
August 30, 2009
Preben: Doh - yes, I mean Vista. Event driven capture should work reliably 100% of the time on Win7; it's spotty on Vista. On Vista, if you use MMCSS, the timer interrupt is set to 1ms, so you don't need to use the timeBeginPeriod API. For Win7, MMCSS doesn't reduce the timer frequency, so you need to do that manually (check the render exclusive timer driven sample for details).
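A rough illustration of what "doing that manually" means (a sketch, not the sample's code):

    #include <windows.h>
    #include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod  (link with winmm.lib)
    #include <avrt.h>       // AvSetMmThreadCharacteristics     (link with avrt.lib)

    //  Rough sketch: raise the timer resolution to 1ms for the duration of the
    //  render/capture loop and register the thread with MMCSS.
    void RunTimerDrivenLoop()
    {
        timeBeginPeriod(1);     // on Win7 MMCSS doesn't do this for you

        DWORD mmcssTaskIndex = 0;
        HANDLE mmcssHandle = AvSetMmThreadCharacteristics(TEXT("Audio"), &mmcssTaskIndex);

        //  ... Sleep()-based render or capture loop runs here ...

        if (mmcssHandle != NULL)
        {
            AvRevertMmThreadCharacteristics(mmcssHandle);
        }
        timeEndPeriod(1);       // always balance the timeBeginPeriod call
    }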
Anonymous
September 29, 2009
I am following a sample, CaptureSharedTimerDriven, under SDK 7.0 on the Win 7 RTM release, 64 bit version. Here is what I have changed in the code, as you have suggested, to make it work in exclusive mode:

    HRESULT hr = _AudioClient->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE, 0, _EngineLatencyInMS*10000, _EngineLatencyInMS*10000, _MixFormat, NULL);

The above code returns me a very large negative error number. Previously I was using the sample from SDK 6.1 as:

    hr = pAudioClient->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE, 0, hnsRequestedDuration, hnsRequestedDuration, pwfxInTemp, NULL);

This has been working fine on Vista but this code fails on Win 7, which made me try the sample from SDK 7.0.

Machine: Dell Gx620
OS: Win 7 Ultimate, RTM, 64 bit version
Device: Logitech Pro 9000
Driver: MS UVC Driver

Can you please help me out here, I would really appreciate it.
Anonymous
September 29, 2009
The comment has been removed
Anonymous
October 01, 2009
(posting anonymously because the spam filter hates me)

> on your audio hardware the mix format isn't supported by the device.

http://social.msdn.microsoft.com/Forums/en-US/windowspro-audiodevelopment/thread/4c5c3202-b11c-48e2-a0ac-7c238ec5fb59

(100 - ε)% of the time, devices won't support the mix format. The mix format is float32; the vast majority of devices are integer-only.
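In code terms that means an exclusive mode application can't just pass the mix format straight through. A rough sketch (not the SDK sample's code) of testing the mix format and falling back to 16-bit PCM, assuming an already-activated IAudioClient:

    #include <windows.h>
    #include <audioclient.h>
    #include <mmreg.h>

    //  Rough sketch: exclusive mode streams have to use a format the device supports
    //  natively; the shared mode mix format usually won't be it.
    HRESULT PickExclusiveModeFormat(IAudioClient *audioClient, WAVEFORMATEX **format)
    {
        WAVEFORMATEX *mixFormat = NULL;
        audioClient->GetMixFormat(&mixFormat);   // typically 32-bit float

        HRESULT hr = audioClient->IsFormatSupported(AUDCLNT_SHAREMODE_EXCLUSIVE, mixFormat, NULL);
        if (FAILED(hr))
        {
            //  Fall back to plain 16-bit integer PCM at the same rate/channel count.
            mixFormat->wFormatTag = WAVE_FORMAT_PCM;
            mixFormat->wBitsPerSample = 16;
            mixFormat->nBlockAlign = (mixFormat->wBitsPerSample / 8) * mixFormat->nChannels;
            mixFormat->nAvgBytesPerSec = mixFormat->nBlockAlign * mixFormat->nSamplesPerSec;
            mixFormat->cbSize = 0;
            hr = audioClient->IsFormatSupported(AUDCLNT_SHAREMODE_EXCLUSIVE, mixFormat, NULL);
        }

        if (SUCCEEDED(hr))
        {
            *format = mixFormat;      // caller frees with CoTaskMemFree
        }
        else
        {
            CoTaskMemFree(mixFormat); // device wants something else - enumerate its formats
            *format = NULL;
        }
        return hr;
    }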
Anonymous
October 20, 2009
Thanks for the info about the WASAPI examples. One question: is it possible to use WASAPI for realtime streaming (in and out) and XAudio2 at the same time? I don't understand whether the low level Windows mixer can mix WASAPI output streams and XAudio2 output together. Background: I want to build a 3D sound world on a headset and mix this with a VoIP connection. For VoIP I need realtime I/O with very low latency.
Anonymous
October 20, 2009
XAudio2 just uses the audio engine, so there should be no reason that they wouldn't mix.