Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

Microsoft Speech Platform

ISpAudio::SetState

ISpAudio::SetState sets the state of the audio device.

```
HRESULT SetState(
   SPAUDIOSTATE   NewState,
   ULONGLONG      ullReserved
);
```

Parameters

  • NewState
    [in] Flag of type [SPAUDIOSTATE](jj127454(v=msdn.10).md) specifying the new state of the audio device.
  • ullReserved
    [in] Reserved, do not use. This value must be zero.

Return Values

| Value | Description |
|-------|-------------|
| S_OK | Function completed successfully. |
| E_INVALIDARG | ullReserved is not zero. |
| SPERR_DEVICE_BUSY | Hardware device is in use by another thread or process. |
| SPERR_UNSUPPORTED_FORMAT | Current format set by ISpAudio::SetFormat is not supported by the hardware device. |
| SPERR_INVALID_AUDIO_STATE | NewState is not set to a valid value. |
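
The following minimal sketch illustrates the call sequence. The use of the SpCreateDefaultObjectFromCategoryId helper and the SPCAT_AUDIOIN category (both from the SAPI helper header sphelper.h) to obtain the ISpAudio pointer, the function name RunDefaultAudioInput, and the assumption that COM is already initialized on the calling thread are all illustrative choices, not requirements of SetState itself.

```cpp
#include <sapi.h>
#include <sphelper.h>

// Minimal illustration: open the default audio-input device, start it,
// and close it again. Assumes COM is already initialized on this thread.
HRESULT RunDefaultAudioInput()
{
    ISpAudio *pAudio = NULL;

    // Create the default audio-input device (use SPCAT_AUDIOOUT for playback).
    HRESULT hr = SpCreateDefaultObjectFromCategoryId(SPCAT_AUDIOIN, &pAudio);

    // Start the device; the second parameter (ullReserved) must be zero.
    if (SUCCEEDED(hr))
    {
        hr = pAudio->SetState(SPAS_RUN, 0);
    }

    // ... read from the device here ...

    if (pAudio != NULL)
    {
        pAudio->SetState(SPAS_CLOSED, 0);   // release the hardware device
        pAudio->Release();
    }
    return hr;
}
```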

Remarks

When transitioning from the SPAS_CLOSED state to any other state, the caller should be ready to handle various error conditions, specifically SPERR_UNSUPPORTED_FORMAT and SPERR_DEVICE_BUSY. Many multimedia devices do not correctly report their capabilities for handling different audio formats and fail only when an attempt is made to open the device.

Also, on many older systems, audio output devices can be opened by only a single process. Therefore, SPERR_DEVICE_BUSY is returned if an attempt is made to open a device that is already in use by a different process or thread.
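
A sketch of this kind of defensive open is shown below. It assumes pAudio is a valid ISpAudio pointer whose device is currently in the SPAS_CLOSED state and whose format has already been set; the retry count and delay are arbitrary illustrations, not part of the platform.

```cpp
// Try to open the device from SPAS_CLOSED, retrying briefly if another
// process or thread currently owns it.
HRESULT hr = SPERR_DEVICE_BUSY;
for (int attempt = 0; attempt < 3 && hr == SPERR_DEVICE_BUSY; attempt++)
{
    hr = pAudio->SetState(SPAS_RUN, 0);
    if (hr == SPERR_DEVICE_BUSY)
    {
        ::Sleep(250);   // arbitrary back-off before trying again
    }
}

if (hr == SPERR_DEVICE_BUSY)
{
    // Still busy after the retries: report the conflict to the user
    // rather than failing silently.
}
else if (hr == SPERR_UNSUPPORTED_FORMAT)
{
    // The device rejected the format only at open time; select a
    // different format with ISpAudio::SetFormat and try again.
}
```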

On some older sound cards, simultaneous recording and playback is either not possible at all or possible only when both use the same frequency. An application that uses both audio input and output should be aware of this and, if the sound card requires it, gracefully degrade from higher-quality frequencies to a single frequency shared by both.
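
One way to arrange such a fallback is sketched below, using the CSpStreamFormat helper from sphelper.h and two hypothetical device pointers, pAudioIn and pAudioOut. The particular list of formats is only an example.

```cpp
// Try progressively lower-quality formats, applying the same one to both
// the input and the output device, until the card accepts the combination.
static const SPSTREAMFORMAT aFallback[] =
{
    SPSF_44kHz16BitMono,
    SPSF_22kHz16BitMono,
    SPSF_11kHz16BitMono
};

HRESULT hr = SPERR_UNSUPPORTED_FORMAT;
for (size_t i = 0; i < sizeof(aFallback) / sizeof(aFallback[0]) && FAILED(hr); i++)
{
    CSpStreamFormat fmt;
    hr = fmt.AssignFormat(aFallback[i]);

    // Give both devices the same format, then try to open them together.
    if (SUCCEEDED(hr)) hr = pAudioIn->SetFormat(fmt.FormatId(), fmt.WaveFormatExPtr());
    if (SUCCEEDED(hr)) hr = pAudioOut->SetFormat(fmt.FormatId(), fmt.WaveFormatExPtr());
    if (SUCCEEDED(hr)) hr = pAudioIn->SetState(SPAS_RUN, 0);
    if (SUCCEEDED(hr)) hr = pAudioOut->SetState(SPAS_RUN, 0);

    if (hr == SPERR_UNSUPPORTED_FORMAT)
    {
        // The card cannot run both devices at this frequency; close them
        // and retry with the next, lower-quality entry in the list.
        pAudioIn->SetState(SPAS_CLOSED, 0);
        pAudioOut->SetState(SPAS_CLOSED, 0);
    }
    else if (hr == SPERR_DEVICE_BUSY)
    {
        break;   // a different format will not help while another process owns the device
    }
}
```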

In general, applications need not change the state of the audio device directly. The Speech Platform will automatically manage the state of the audio device based on the state of all the grammars, recognition contexts, and the recognizer instance.