Windows CE Audio Driver Samples
This is my first blog post, so please feel free to leave feedback with questions or comments, especially if you feel I've gotten anything wrong or if there's some critical bit of information missing.
Windows CE currently ships audio driver samples descended from three distinct codebases: MDD/PDD, WaveDev2, and UAM. There are historical and functional reasons for this, but the existence of different driver models that all do more-or-less the same thing has caused some confusion. I'll try to clear things up a little in this posting.
First off, all three sample designs adhere to the same WaveAPI driver interface. They all hook into the system as device drivers, export WAV_Open, WAV_IOControl, WAV_Close, etc. entry points, and handle IOCTL_WAV_MESSAGE IoControl codes to interact with the waveapi subsystem. That upper edge is hardware-independent; all of the hardware-dependent code lives inside the driver. The difference between the samples is in their internal design.
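To make that upper edge concrete, here's a minimal compilable sketch of those entry points. The entry-point names match the text above and follow the generic CE stream-driver pattern, but the context handles, the IOCTL constant's value, and all of the bookkeeping below are simplified placeholders I've invented for illustration, not the real waveapi definitions.

```cpp
// Simplified stand-ins for the Windows types used by the entry points.
typedef unsigned long DWORD;
typedef int BOOL;
typedef unsigned char *PBYTE;

// Placeholder value; the real IOCTL code comes from the CE headers.
const DWORD IOCTL_WAV_MESSAGE = 0x001D000C;

static int g_deviceReady;  // one device, matching the sample drivers
static int g_openCount;

DWORD WAV_Init(DWORD dwContext)  // called by Device Manager at load time
{
    (void)dwContext;
    g_deviceReady = 1;
    return 1;  // nonzero device context handle = success
}

DWORD WAV_Open(DWORD hDeviceContext, DWORD dwAccess, DWORD dwShareMode)
{
    (void)dwAccess; (void)dwShareMode;
    if (hDeviceContext == 0 || !g_deviceReady)
        return 0;
    g_openCount++;
    return 2;  // nonzero open context handle = success
}

BOOL WAV_IOControl(DWORD hOpenContext, DWORD dwCode,
                   PBYTE pBufIn, DWORD dwLenIn,
                   PBYTE pBufOut, DWORD dwLenOut, DWORD *pdwActualOut)
{
    (void)pBufIn; (void)dwLenIn; (void)pBufOut; (void)dwLenOut; (void)pdwActualOut;
    if (hOpenContext == 0)
        return 0;
    // waveapi funnels every wave message through this one IOCTL; a real
    // driver would now dispatch on the message code passed in the buffers.
    return dwCode == IOCTL_WAV_MESSAGE;
}

BOOL WAV_Close(DWORD hOpenContext)
{
    if (hOpenContext == 0 || g_openCount <= 0)
        return 0;
    g_openCount--;
    return 1;
}
```

All of the interesting per-model differences live below WAV_IOControl; the shell of the driver looks the same in all three samples.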
MDD/PDD
The oldest design, and the one most widely used today among embedded Windows CE platforms, is the MDD/PDD model. The MDD/PDD implementation splits the driver into two pieces: a "sort-of hardware independent" MDD layer and a "really hardware dependent" PDD layer. The MDD portion is shipped as public code (in public\COMMON\oak\drivers\wavedev\mdd) and generates a library named wavemdd.lib. The PDD layer must be written (or ported from public\COMMON\oak\drivers\wavedev\pdd) by the OEM. To build a complete driver, the two layers are statically linked together. Between the MDD and PDD layers there is a functional interface defined by public\common\oak\inc\waveddsi.h.
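As a rough illustration of the split, here's a compilable sketch. PDD_AudioInitialize and PDD_AudioGetInterruptType are function names as I recall them from waveddsi.h, but every signature, the flag values, and the SimulateAudioInterrupt test hook below are simplified stand-ins of my own; check the real header for the actual interface.

```cpp
typedef unsigned long DWORD;
typedef int BOOL;

// Placeholder flags; the real interrupt-type values live in waveddsi.h.
const DWORD AUDIO_INT_OUTPUT = 0x1;  // output DMA buffer needs refilling
const DWORD AUDIO_INT_INPUT  = 0x2;  // input DMA buffer needs draining

static DWORD g_intrType;  // latched by the (simulated) audio ISR

// --- PDD layer: OEM-written, talks directly to the codec/DMA hardware ---
BOOL PDD_AudioInitialize(DWORD dwIndex)
{
    (void)dwIndex;
    // Real code: map registers, allocate DMA buffers, hook the shared interrupt.
    g_intrType = 0;
    return 1;
}

DWORD PDD_AudioGetInterruptType(void)
{
    // Called by the MDD's interrupt thread to learn whether the shared
    // interrupt fired for input, output, or both (clears on read here).
    DWORD type = g_intrType;
    g_intrType = 0;
    return type;
}

// Test hook standing in for the hardware raising the audio interrupt.
void SimulateAudioInterrupt(DWORD type) { g_intrType = type; }

// --- MDD layer: shipped source; one iteration of its interrupt thread ---
void MDD_HandleAudioInterrupt(void)
{
    DWORD type = PDD_AudioGetInterruptType();
    if (type & AUDIO_INT_OUTPUT) { /* refill the output DMA buffer */ }
    if (type & AUDIO_INT_INPUT)  { /* drain the input DMA buffer */ }
}
```

The point of the design is visible even in this toy version: the MDD owns the threading and buffer logistics, and the PDD only answers hardware questions.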
The waveapi driver interface already does a pretty good job of distilling hardware dependencies down to the driver level, so one might wonder how MDD/PDD can further separate hardware independent/dependent layers. To do this, the MDD layer makes a couple of assumptions about the way the hardware works and what types of features you want to support.
Here are some assumptions MDD/PDD makes:
· Only one device (waveOutGetNumDevs always returns 1)
· Only one stream per device (i.e. one input and one output stream). Note that waveapi includes an internal “software mixer” which can virtualize the single output stream into multiple streams at the application level.
· Input and output DMA share the same interrupt.
By making these assumptions, the MDD/PDD model greatly simplifies the PDD layer, and the MDD/PDD driver is relatively easy to port if you have fairly generic audio hardware and you have fairly generic needs. However, if your hardware is nonstandard, or if you need to implement some special handling, you may find yourself itching to modify the MDD source code. At that point you may be fighting against the MDD/PDD interface design and creating more complexity than needed.
Wavedev2
At the start of the Smartphone project in 2000 we had a number of audio requirements which we found the MDD/PDD model could not meet without major changes to the MDD/PDD interface. In addition, at that point in time (WinCE 3.0) there was no waveapi “software mixer” to allow us to play multiple sounds concurrently, so we knew we would have to take care of that in the driver. The solution was to start over and implement a new design which became informally known as wavedev2 (the original wave driver was under platform\hornet\drivers\wavedev, so when it came time to start on the new design it got put in the wavedev2 subdirectory).
Wavedev2 is a monolithic design in which all the source files are located in a single directory. To port a wavedev2 driver you just copy all the files from an existing sample and start modifying. This actually isn’t as bad as it sounds because in most cases the only files you need to modify are hwctxt.h and hwctxt.cpp. In retrospect it would have been better to put the files in different directories to make this a little more clear, reduce the tendency of OEMs to make random changes in the other files, and simplify the task of fixing bugs in the other files. That's probably something we'll be looking at cleaning up in the future.
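For a feel of what the hwctxt.h/hwctxt.cpp work involves, here's a hypothetical sketch. The wavedev2 samples do center their platform code on a hardware-context class, but every member name and behavior below is invented for illustration and is not the actual sample code.

```cpp
// Hypothetical platform-specific piece of a wavedev2 port. In the real
// samples this logic lives in hwctxt.h/hwctxt.cpp; everything here is a
// made-up sketch of the kind of code an OEM replaces for their SoC.
class HardwareContext {
public:
    bool Init()
    {
        m_outputRunning = false;
        return MapRegisters();  // fails if the hardware can't be reached
    }
    bool StartOutputDMA()
    {
        if (!m_mapped) return false;
        m_outputRunning = true;  // real code: program the DMA controller
        return true;
    }
    void StopOutputDMA()
    {
        m_outputRunning = false;  // real code: halt/park the DMA engine
    }
    bool OutputRunning() const { return m_outputRunning; }

private:
    bool MapRegisters()
    {
        m_mapped = true;  // real code: map codec/DMA registers
        return m_mapped;
    }
    bool m_mapped = false;
    bool m_outputRunning = false;
};
```

The rest of the wavedev2 sources (mixing, SRC, stream management) call into this class and should normally be left alone.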
The most recent wavedev2 sample was shipped as part of WinCE 6 under public\common\oak\drivers\wavedev\wavedev2\ensoniq. This latest wavedev2 driver includes the following features which are not found in the other driver implementations:
- “MIDI” synthesizer. I put MIDI in quotes because, frankly, it’s a pretty minimal implementation which only supports sine wave output (this will probably be another blog topic). However, it works great for the types of things a phone needs to do: play DTMF and call progress tones and simple melodies. (See The Wavedev2 MIDI Implementation)
- Sample-rate-conversion and mixing on both input and output streams. The driver can mix multiple output streams at different sample rates into a single output stream (something that can now be done with the MDD/PDD driver using the software mixer). It can also split the single input stream and source it to multiple applications at different sample rates (something no other driver design can currently do).
- A “gain class” interface. Each output stream is associated with a specific class. Whenever an app creates a new stream it is associated with class 0, although the app can move its stream to a different class via a waveOutMessage call to the driver. The system can use a separate waveOutMessage call to the driver to control the volume level on a per-class basis. This interface is used by the shell to do things like mute audio playback when a phone call is in progress. This is probably another blog topic for later. (See The Wavedev2 Gainclass Implementation)
- A “forcespeaker” interface which is used by the shell to “hint” to the driver that a specific sound should be played out the speaker even if a headset is plugged in; this typically lets an OEM play ringtones through the speaker while a headset is attached. (See The Wavedev2 ForceSpeaker API)
- Support for an S/PDIF interface and for streaming of WMAPro compressed content across S/PDIF. This is a recent addition, specific to the Ensoniq version, which was used as a proof-of-concept for the Tomatin project. (See Multichannel Audio in Windows CE)
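To sketch how the gain-class and forcespeaker extensions above might be dispatched inside the driver: the MM_WOM_* message names follow the Windows Mobile headers, but the numeric values, the HandleWaveMessage function, and both structures below are placeholders I made up for this sketch, not the shipping wavedev2 code.

```cpp
typedef unsigned long DWORD;

// Placeholder values; the real message codes live in the platform headers.
const DWORD MM_WOM_SETSECONDARYGAINCLASS = 0xDF00;
const DWORD MM_WOM_SETSECONDARYGAINLIMIT = 0xDF01;
const DWORD MM_WOM_FORCESPEAKER          = 0xDF02;

const DWORD NUM_GAIN_CLASSES = 4;  // illustrative count

struct StreamContext {
    DWORD gainClass = 0;       // new streams start in class 0
    bool  forceSpeaker = false;
};

struct DriverState {
    // Per-class volume limits; 0xFFFF = full volume (no attenuation).
    DWORD classGain[NUM_GAIN_CLASSES] = { 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF };
};

// Returns 0 on success, nonzero on failure, mimicking the MMRESULT
// convention of waveOutMessage().
DWORD HandleWaveMessage(DriverState& drv, StreamContext& stream,
                        DWORD uMsg, DWORD dwParam1, DWORD dwParam2)
{
    switch (uMsg) {
    case MM_WOM_SETSECONDARYGAINCLASS:  // app moves its stream to a class
        if (dwParam1 >= NUM_GAIN_CLASSES) return 1;
        stream.gainClass = dwParam1;
        return 0;
    case MM_WOM_SETSECONDARYGAINLIMIT:  // shell sets a per-class limit
        if (dwParam1 >= NUM_GAIN_CLASSES) return 1;
        drv.classGain[dwParam1] = dwParam2;
        return 0;
    case MM_WOM_FORCESPEAKER:           // shell hints: route to the speaker
        stream.forceSpeaker = (dwParam1 != 0);
        return 0;
    }
    return 1;  // unrecognized message
}
```

On the application side these arrive via waveOutMessage(hwo, uMsg, dwParam1, dwParam2); the shell muting playback during a call is just a MM_WOM_SETSECONDARYGAINLIMIT with a limit of 0 on the relevant class.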
[Note: a previous version of this blog claimed that a wavedev2 sample shipped with the Tomatin (NMD) feature pack under public\fp_nmd\common\oak\drivers\wavedev\wavedev2\ensoniq. I was wrong; the files did not ship in that release. I apologize to anyone I misled. The sample code in the CE6 release should be backward compatible, although I have no idea whether there are any licensing issues with using CE6 sample code with a CE5 device]
If you’re developing a Windows Mobile Smartphone or PocketPC Phone, you pretty much have to start with the wavedev2 sample: the system depends on a number of the extensions implemented in wavedev2. On the other hand, if you’re developing an embedded Windows CE product you can use whichever design best fits your needs.
UAM
During the development of WinCE 4.2 the audio team was working on adding support for DirectSound and needed a sample driver to demonstrate exposing DirectSound support from the device driver. As was discovered during the Smartphone effort, retrofitting the MDD/PDD driver would have entailed a number of changes. Instead, a new monolithic driver was written using some bits of the wavedev2 design, with added support for the Ensoniq-specific feature of mixing two audio streams in hardware (and falling back to the software mixer for any additional streams). While there are superficial similarities between UAM and wavedev2, the two designs are quite different internally.
However, support for DirectSound was dropped in WinCE 5.0, and it's very rare to find audio hardware that can mix streams in hardware. There's nothing wrong with UAM, and many OEMs still use it as the basis for their audio driver ports, but for new designs it doesn't add much value over either of the other models.
Comments
Anonymous
January 12, 2007
In the Windows CE audio stack, the term "mixer" is used to refer to a couple of different, unrelated
Anonymous
January 18, 2007
Hi Andy, This is a nice article and interesting as well. We are doing a project on Windows Mobile wherein we are to play an audio clip while the user is on a call. Can you please suggest whether I can use these APIs in our solution? I am asking because in many blogs we found that developing an answering machine is not possible on Windows Mobile, but after seeing your blog I feel that I can mix voice and noise using these APIs. It would be appreciated if you could send me some samples as well. My mail id is pavanng@gmail.com, pavanng@yahoo.com. Please help. Thanks and Regards, Pavan.
Anonymous
January 24, 2007
Hi Andy, If the voice notes application is active during a voice call (circuit-switched/VoIP), should the wave driver just pass the uplink (or mic) data, or a mix of uplink + downlink data? Passing a mix of uplink + downlink voice can satisfy the use case of voice call recording. Otherwise, is there any other way to support the voice call recording use case (if the hardware permits) in Windows Mobile?
Anonymous
January 24, 2007
It's up to the driver writer to decide what to record while a call is active, although I believe they typically just record the mic data. I don't think we dictate that as a requirement to OEMs though.
Anonymous
March 20, 2007
Hi, I would like to record the mix of uplink & downlink streams of a voice call (circuit-switched & VoIP). I have found that there is a telephony API & audio APIs, but no API for accessing the (up/down) call streams. The uplink can be obtained by recording the mic data. Is there a way to record the downlink data? Better still, is there a way to record a mix of the uplink & downlink streams? Thanks in advance, Edison
Anonymous
March 22, 2007
The comment has been removed
Anonymous
August 18, 2007
Hi, Andy: I am also interested in the answer to Pavan's question "How to mix voice and noise". Would you please email the sample code to jialy@hotmail.com? Thanks in advance. Pavan: If you have the answer, please email me also. Thanks. BR,
Anonymous
September 10, 2007
How can I use the mciSendCommand API to control the audio device in Windows Mobile 5.0? I tried to add mci32.ocx but couldn't add it to the toolbox. Then I switched to mciSendCommand but couldn't find which dll/lib it is in. Can you help me? Aharon
Anonymous
March 17, 2010
Dear Andy, I am developing a USB audio driver using the MDD/PDD model and have completed the USB part: opening the control and streaming interfaces, endpoints, etc. Can you kindly provide a pointer on integrating the MDD/PDD framework with my USB driver? Regards.
Anonymous
October 29, 2010
In Windows CE 5.0, is there any way to return info about the state of the WaveDev driver to the application via waveOutMessage()? I have not been able to locate any info about WaveAPI.dll. Thanks
Anonymous
October 29, 2010
In Windows CE 5.0, is there any way to return info about the state of the WaveDev driver to the application via waveOutMessage()? I have not been able to locate any info about WaveAPI.dll (MDD/PDD model). Thanks
Anonymous
September 24, 2013
How can I reinstall sound drivers on an ARM WM 8505 WinCE netbook? The audio IC is ES8328 but I am unable to get the drivers.
September 25, 2013
Sorry, I don't know if I can help (I didn't know it was even possible to uninstall drivers from a WinCE device). It might be possible to reset the system to factory configuration, but I'm not familiar with that device and you would probably lose whatever content you have. How did you uninstall the drivers in the first place?