UCMA 3.0 Core Object Model

The major components that appear in a Microsoft Unified Communications Managed API (UCMA) application are LocalEndpoint (of which two implementations are ApplicationEndpoint and UserEndpoint), Conversation, and CollaborationPlatform. A CollaborationPlatform instance can manage multiple LocalEndpoint instances, and each LocalEndpoint instance can have multiple Conversation instances.
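
The following is a minimal sketch of how these components are typically wired together. The user agent name, SIP URI, server name, and credentials are placeholders, and the synchronous pairing of each Begin/End method is used only to keep the example short; real applications normally use the asynchronous callback pattern and add error handling.

using System.Net;
using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Signaling;

class CoreObjectModelSketch
{
    static void Main()
    {
        // One CollaborationPlatform instance per application, started once.
        var platformSettings = new ClientPlatformSettings("SampleUserAgent", SipTransportType.Tls);
        var platform = new CollaborationPlatform(platformSettings);
        platform.EndStartup(platform.BeginStartup(null, null));

        // A LocalEndpoint (here a UserEndpoint) is managed by the platform; one platform
        // can manage many endpoints. The URI, server, and credentials are placeholders.
        var endpointSettings = new UserEndpointSettings("sip:alice@contoso.com", "sipserver.contoso.com", 5061);
        endpointSettings.Credential = new NetworkCredential("alice", "password", "CONTOSO");
        var endpoint = new UserEndpoint(platform, endpointSettings);
        endpoint.EndEstablish(endpoint.BeginEstablish(null, null));

        // Each endpoint can own many Conversation instances; calls are added to a conversation.
        var conversation = new Conversation(endpoint);

        // ... establish calls on the conversation, then shut down in reverse order.
        endpoint.EndTerminate(endpoint.BeginTerminate(null, null));
        platform.EndShutdown(platform.BeginShutdown(null, null));
    }
}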

In addition to listing many of the UCMA 3.0 components, the illustration arranges its elements in two dimensions. The horizontal axis is divided into two categories: call controls and media controls. Call controls are concerned with signaling data, while media controls are concerned with the instant message (IM) and audio data that is communicated between participants.
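
The distinction can be sketched with an instant messaging call that continues from the previous example (it reuses the conversation object; the destination URI is a placeholder). The InstantMessagingCall is a call control and handles signaling, while the InstantMessagingFlow it exposes is a media control and carries the message content. In production code the application would typically handle the flow configuration and state-change events rather than assume the flow is immediately active.

// Call control (signaling): establishing the call negotiates the session with the remote party.
InstantMessagingCall imCall = new InstantMessagingCall(conversation);
imCall.EndEstablish(imCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));

// Media control (data): the flow created by the call carries the actual IM payload.
InstantMessagingFlow imFlow = imCall.Flow;
imFlow.EndSendInstantMessage(imFlow.BeginSendInstantMessage("Hello from UCMA", null, null));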

The call control category is further subdivided into multiparty controls and two-party controls, which handle conversations among three or more participants and conversations between two participants, respectively.
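
The two-party case is the one shown above, where BeginEstablish is given a single destination URI. For the multiparty case, the conversation joins a conference and the per-modality MCU sessions expose the multiparty controls. The following fragment is a sketch that again reuses the conversation object; the conference URI is a placeholder, and null is passed where join options would normally be supplied.

// Multiparty: join a conference instead of dialing a single participant.
ConferenceSession conference = conversation.ConferenceSession;
conference.EndJoin(conference.BeginJoin("sip:conf-placeholder@contoso.com", null, null, null));

// Establishing a call with no destination connects it to the corresponding MCU.
AudioVideoCall avCall = new AudioVideoCall(conversation);
avCall.EndEstablish(avCall.BeginEstablish(null, null));

// Multiparty (MCU) controls for each modality hang off the conference session.
AudioVideoMcuSession avMcu = conference.AudioVideoMcuSession;
InstantMessagingMcuSession imMcu = conference.InstantMessagingMcuSession;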

The media control category is further subdivided into media flows, devices, and media providers. Each type of media (IM or audio/video) has its own type of flow. The devices in the Devices column can be used to record an audio stream, play an audio stream, and send or receive telephone keypad tones. There are also two devices that, when used in conjunction with the Microsoft.Speech object model, can be used to recognize and synthesize speech. Two of the media providers shown are provided with UCMA 3.0. The third (labeled as ContosoProvider in the illustration) is not provided, but can be implemented by third-party developers. Media providers are not directly accessible, but the flows they provide are accessible.
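
The following sketch shows how the two speech devices (SpeechRecognitionConnector and SpeechSynthesisConnector) pair with the Microsoft.Speech object model, assuming an already-active AudioVideoFlow named audioVideoFlow; the grammar, prompt text, and audio formats are illustrative only. It requires the Microsoft.Rtc.Collaboration.AudioVideo namespace and the Microsoft.Speech assemblies.

// Speech recognition: the connector pulls audio from the flow and exposes it as a stream
// that a Microsoft.Speech recognition engine can consume.
SpeechRecognitionConnector recognitionConnector = new SpeechRecognitionConnector();
recognitionConnector.AttachFlow(audioVideoFlow);
SpeechRecognitionStream audioStream = recognitionConnector.Start();

SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
recognizer.LoadGrammar(new Grammar(new GrammarBuilder("hello")));
recognizer.SetInputToAudioStream(audioStream, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
recognizer.SpeechRecognized += (sender, e) => Console.WriteLine("Recognized: {0}", e.Result.Text);
recognizer.RecognizeAsync(RecognizeMode.Multiple);

// Speech synthesis: the connector pushes audio into the flow; the synthesizer writes to it as a stream.
SpeechSynthesisConnector synthesisConnector = new SpeechSynthesisConnector();
synthesisConnector.AttachFlow(audioVideoFlow);
SpeechSynthesizer synthesizer = new SpeechSynthesizer();
synthesizer.SetOutputToAudioStream(synthesisConnector, new SpeechAudioFormatInfo(16000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
synthesisConnector.Start();
synthesizer.Speak("Hello, and welcome.");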

The color-coded components at the same horizontal level represent the components that take part in a particular communication mode. For example, the AudioVideoProvider sends audio/video media to an AudioVideoFlow, and then to either an AudioVideoCall (for two parties) or an AudioVideoMcuSession (for more than two parties). The objects shown in the Devices column can attach to an AudioVideoFlow, either to receive audio media from it (Recorder, ToneController, SpeechRecognitionConnector) or to send audio media into it (Player, ToneController, SpeechSynthesisConnector).
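
The sketch below, again assuming an established AudioVideoFlow named audioVideoFlow and the Microsoft.Rtc.Collaboration.AudioVideo namespace, shows the three non-speech devices attached to a flow; the file names are placeholders and error handling is omitted.

// Recorder: receives audio from the flow and writes it to a sink (here a WMA file).
Recorder recorder = new Recorder();
recorder.AttachFlow(audioVideoFlow);
recorder.SetSink(new WmaFileSink("capture.wma"));
recorder.Start();

// Player: sends audio into the flow from a prepared source (here a WMA file).
WmaFileSource promptSource = new WmaFileSource("prompt.wma");
promptSource.EndPrepareSource(promptSource.BeginPrepareSource(MediaSourceOpenMode.Buffered, null, null));
Player player = new Player();
player.AttachFlow(audioVideoFlow);
player.SetSource(promptSource);
player.Start();

// ToneController: both sends and receives telephone keypad (DTMF) tones on the flow.
ToneController toneController = new ToneController();
toneController.AttachFlow(audioVideoFlow);
toneController.ToneReceived += (sender, e) => Console.WriteLine("Tone received: {0}", e.Tone);
toneController.Send(ToneId.Pound);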

The vertical axis is divided into two principal categories: single-modal and multimodal. These categories indicate whether communication occurs by means of a single mode (for example, using IM only) or by multiple modes (for example, using IM and audio).
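
For example, a single Conversation can carry an IM call and an audio/video call to the same remote participant at the same time. The fragment below is a sketch that reuses the endpoint from the first example; the destination URI is a placeholder.

// One Conversation, two modalities: the calls share conversation-level state and signaling context.
Conversation multimodalConversation = new Conversation(endpoint);

InstantMessagingCall imCall = new InstantMessagingCall(multimodalConversation);
imCall.EndEstablish(imCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));

AudioVideoCall avCall = new AudioVideoCall(multimodalConversation);
avCall.EndEstablish(avCall.BeginEstablish("sip:bob@contoso.com", new CallEstablishOptions(), null, null));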

UCMA 3.0 provides built-in support for the instant messaging and audio communication modalities. The platform can be extended to support other modalities. The top row of the Conversation area in the illustration shows the components that third-party developers can create to provide this support.

The relationships among the major components of UCMA 3.0 appear in the following illustration.

(Illustration: relationships among the major components of UCMA 3.0)