Quickstart: Add raw media access to your app

In this quickstart, you learn how to implement raw media access by using the Azure Communication Services Calling SDK for Unity. The Calling SDK provides APIs that let an app generate its own video frames to send, and render raw video frames received from remote participants in a call. This quickstart builds on Quickstart: Add 1:1 video calling to your app for Unity.

RawVideo access

Because the app generates the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats that the app can generate. This information lets the Calling SDK pick the best video format configuration for the network conditions at the time.

Virtual video

Supported video resolutions

Aspect ratio    Resolution         Maximum FPS
16x9            1080p              30
16x9            720p               30
16x9            540p               30
16x9            480p               30
16x9            360p               30
16x9            270p               15
16x9            240p               15
16x9            180p               15
4x3             VGA (640x480)      30
4x3             424x320            15
4x3             QVGA (320x240)     15
4x3             212x160            15
  1. Follow the steps in Quickstart: Add 1:1 video calling to your app to create a Unity game. The goal is to obtain a CallAgent object that's ready to start a call. Find the finished code for this quickstart on GitHub.

  2. Create an array of VideoStreamFormat by using the VideoStreamPixelFormat values that the SDK supports. When multiple formats are available, the order of the formats in the list doesn't influence or prioritize which one is used. The criteria for format selection are based on external factors like network bandwidth. (An illustrative sketch with more than one candidate format follows the snippet below.)

    var videoStreamFormat = new VideoStreamFormat
    {
        Resolution = VideoStreamResolution.P360, // For VirtualOutgoingVideoStream the width/height should be set using VideoStreamResolution enum
        PixelFormat = VideoStreamPixelFormat.Rgba,
        FramesPerSecond = 15,
        Stride1 = 640 * 4 // It is times 4 because RGBA is a 32-bit format
    };
    VideoStreamFormat[] videoStreamFormats = { videoStreamFormat };
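
    When offering more than one format, additional entries are built the same way. The following is an illustrative sketch only (the 720p values here are an assumption, not a requirement); the SDK, not the list order, decides which format is used:

    var altVideoStreamFormat = new VideoStreamFormat
    {
        Resolution = VideoStreamResolution.P720,
        PixelFormat = VideoStreamPixelFormat.Rgba,
        FramesPerSecond = 30,
        Stride1 = 1280 * 4 // RGBA is a 32-bit format, so stride = width * 4
    };
    VideoStreamFormat[] multipleVideoStreamFormats = { videoStreamFormat, altVideoStreamFormat };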
    
  3. Create RawOutgoingVideoStreamOptions and set Formats with the object that you created earlier.

    var rawOutgoingVideoStreamOptions = new RawOutgoingVideoStreamOptions
    {
        Formats = videoStreamFormats
    };
    
  4. Create an instance of VirtualOutgoingVideoStream by using the RawOutgoingVideoStreamOptions instance that you created earlier.

    var rawOutgoingVideoStream = new VirtualOutgoingVideoStream(rawOutgoingVideoStreamOptions);
    
  5. Subscribe to the RawOutgoingVideoStream.FormatChanged delegate. This event notifies you whenever the VideoStreamFormat changes to one of the video formats provided in the list.

    rawOutgoingVideoStream.FormatChanged += (object sender, VideoStreamFormatChangedEventArgs args) =>
    {
        VideoStreamFormat videoStreamFormat = args.Format;
    };
    
  6. Subscribe to the RawOutgoingVideoStream.StateChanged delegate. This event notifies you whenever the State changes.

    rawOutgoingVideoStream.StateChanged += (object sender, VideoStreamStateChangedEventArgs args) =>
    {
        CallVideoStream callVideoStream = args.Stream;

        switch (callVideoStream.Direction)
        {
            case StreamDirection.Outgoing:
                OnRawOutgoingVideoStreamStateChanged(callVideoStream as OutgoingVideoStream);
                break;
            case StreamDirection.Incoming:
                OnRawIncomingVideoStreamStateChanged(callVideoStream as IncomingVideoStream);
                break;
        }
    };
    
  7. Handle raw outgoing video stream state transitions such as Started and Stopped, and start generating custom video frames or suspend the frame-generation algorithm. (A sketch of one possible frame-generation worker appears after the note below.)

    private async void OnRawOutgoingVideoStreamStateChanged(OutgoingVideoStream outgoingVideoStream)
    {
        switch (outgoingVideoStream.State)
        {
            case VideoStreamState.Started:
                switch (outgoingVideoStream.Kind)
                {
                    case VideoStreamKind.VirtualOutgoing:
                        outgoingVideoPlayer.StartGenerateFrames(outgoingVideoStream); // This is where a background worker thread can be started to feed the outgoing video frames.
                        break;
                }
                break;
    
            case VideoStreamState.Stopped:
                switch (outgoingVideoStream.Kind)
                {
                    case VideoStreamKind.VirtualOutgoing:
                        break;
                }
                break;
        }
    }
    

    The following is an example of an outgoing video frame generator:

    private unsafe RawVideoFrame GenerateRawVideoFrame(RawOutgoingVideoStream rawOutgoingVideoStream)
    {
        var format = rawOutgoingVideoStream.Format;
        int w = format.Width;
        int h = format.Height;
        int rgbaCapacity = w * h * 4;
    
        var rgbaBuffer = new NativeBuffer(rgbaCapacity);
        rgbaBuffer.GetData(out IntPtr rgbaArrayBuffer, out rgbaCapacity);
    
        byte r = (byte)random.Next(1, 255);
        byte g = (byte)random.Next(1, 255);
        byte b = (byte)random.Next(1, 255);
    
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w*4; x += 4)
            {
                ((byte*)rgbaArrayBuffer)[(w * 4 * y) + x + 0] = (byte)(y % r);
                ((byte*)rgbaArrayBuffer)[(w * 4 * y) + x + 1] = (byte)(y % g);
                ((byte*)rgbaArrayBuffer)[(w * 4 * y) + x + 2] = (byte)(y % b);
                ((byte*)rgbaArrayBuffer)[(w * 4 * y) + x + 3] = 255;
            }
        }
    
        // Call ACS Unity SDK API to deliver the frame
        rawOutgoingVideoStream.SendRawVideoFrameAsync(new RawVideoFrameBuffer() {
            Buffers = new NativeBuffer[] { rgbaBuffer },
            StreamFormat = rawOutgoingVideoStream.Format,
            TimestampInTicks = rawOutgoingVideoStream.TimestampInTicks
        }).Wait();
    
        return new RawVideoFrameBuffer()
        {
            Buffers = new NativeBuffer[] { rgbaBuffer },
            StreamFormat = rawOutgoingVideoStream.Format
        };
    }
    

    Note

    The unsafe modifier is used on this method because NativeBuffer requires access to native memory resources. Therefore, the Allow unsafe option must also be enabled in the Unity Editor.
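
    The StartGenerateFrames method referenced in step 7 isn't an SDK API; it's an app-side helper. The following is a minimal sketch, assuming the GenerateRawVideoFrame method shown earlier is accessible from the same class and that a simple background thread paces frame delivery (the OutgoingVideoPlayer name and the stop flag are illustrative):

    public class OutgoingVideoPlayer
    {
        private Thread frameWorkerThread;
        private volatile bool stopFrameWorker;

        public void StartGenerateFrames(OutgoingVideoStream outgoingVideoStream)
        {
            var rawOutgoingVideoStream = outgoingVideoStream as RawOutgoingVideoStream;
            stopFrameWorker = false;
            frameWorkerThread = new Thread(() =>
            {
                while (!stopFrameWorker &&
                       rawOutgoingVideoStream != null &&
                       rawOutgoingVideoStream.Format != null &&
                       rawOutgoingVideoStream.State == VideoStreamState.Started)
                {
                    // GenerateRawVideoFrame (defined above) fills and sends one RGBA frame.
                    GenerateRawVideoFrame(rawOutgoingVideoStream);

                    // Pace the loop to the negotiated frame rate.
                    int delayBetweenFrames = (int)(1000.0 / rawOutgoingVideoStream.Format.FramesPerSecond);
                    Thread.Sleep(delayBetweenFrames);
                }
            });
            frameWorkerThread.Start();
        }

        public void StopGenerateFrames()
        {
            stopFrameWorker = true;
            frameWorkerThread?.Join();
            frameWorkerThread = null;
        }
    }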

  8. Similarly, we can handle incoming video frames in response to the video stream StateChanged event.

    private void OnRawIncomingVideoStreamStateChanged(IncomingVideoStream incomingVideoStream)
    {
        switch (incomingVideoStream.State)
        {
            case VideoStreamState.Available:
                {
                    var rawIncomingVideoStream = incomingVideoStream as RawIncomingVideoStream;
                    rawIncomingVideoStream.RawVideoFrameReceived += OnRawVideoFrameReceived;
                    rawIncomingVideoStream.Start();
                    break;
                }
            case VideoStreamState.Stopped:
                break;
            case VideoStreamState.NotAvailable:
                break;
        }
    }
    
    private void OnRawVideoFrameReceived(object sender, RawVideoFrameReceivedEventArgs e)
    {
        incomingVideoPlayer.RenderRawVideoFrame(e.Frame);
    }
    
    public void RenderRawVideoFrame(RawVideoFrame rawVideoFrame)
    {
        var videoFrameBuffer = rawVideoFrame as RawVideoFrameBuffer;
        pendingIncomingFrames.Enqueue(new PendingFrame() {
                frame = rawVideoFrame,
                kind = RawVideoFrameKind.Buffer });
    }
    
  9. It's highly recommended to manage both incoming and outgoing video frames through a buffering mechanism to avoid overloading the MonoBehaviour.Update() callback method, which should be kept light and free of heavy CPU or network duties to ensure a smoother video experience. This optional optimization is left to developers to decide what works best in their scenarios. (A minimal sketch of the supporting queue types follows the rendering example below.)

    The following is an example of how to render incoming frames to a Unity VideoTexture by dequeuing from an internal queue and calling Graphics.Blit:

    private void Update()
    {
        if (pendingIncomingFrames.TryDequeue(out PendingFrame pendingFrame))
        {
            switch (pendingFrame.kind)
            {
                case RawVideoFrameKind.Buffer:
                    var videoFrameBuffer = pendingFrame.frame as RawVideoFrameBuffer;
                    VideoStreamFormat videoFormat = videoFrameBuffer.StreamFormat;
                    int width = videoFormat.Width;
                    int height = videoFormat.Height;
                    var texture = new Texture2D(width, height, TextureFormat.RGBA32, mipChain: false);
    
                    var buffers = videoFrameBuffer.Buffers;
                    NativeBuffer buffer = buffers.Count > 0 ? buffers[0] : null;
                    buffer.GetData(out IntPtr bytes, out int signedSize);
    
                    texture.LoadRawTextureData(bytes, signedSize);
                    texture.Apply();
    
                    Graphics.Blit(source: texture, dest: rawIncomingVideoRenderTexture);
                    break;
    
                case RawVideoFrameKind.Texture:
                    break;
            }
            pendingFrame.frame.Dispose();
        }
    }
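
    The pendingIncomingFrames queue and the PendingFrame type used in steps 8 and 9 aren't SDK types; they're app-side helpers. A minimal sketch, assuming a thread-safe queue from System.Collections.Concurrent, could look like this:

    // Holds a frame until MonoBehaviour.Update() can render it on the Unity main thread.
    private struct PendingFrame
    {
        public RawVideoFrame frame;
        public RawVideoFrameKind kind;
    }

    // Shared between the SDK callback thread (producer) and Update() (consumer).
    private ConcurrentQueue<PendingFrame> pendingIncomingFrames = new ConcurrentQueue<PendingFrame>();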
    

In this quickstart, you learn how to implement raw media access by using the Azure Communication Services Calling SDK for Windows. The Azure Communication Services Calling SDK provides APIs that let apps generate their own video frames to send to remote participants in a call. This quickstart builds on Quickstart: Add 1:1 video calling to your app for Windows.

RawAudio access

Accessing raw audio media gives you access to the call's incoming audio stream, along with the ability to view and send custom outgoing audio streams during a call.

Send raw outgoing audio

Create an options object specifying the raw stream properties that we want to send.

    RawOutgoingAudioStreamProperties outgoingAudioProperties = new RawOutgoingAudioStreamProperties()
    {
        Format = ACSAudioStreamFormat.Pcm16Bit,
        SampleRate = AudioStreamSampleRate.Hz48000,
        ChannelMode = AudioStreamChannelMode.Stereo,
        BufferDuration = AudioStreamBufferDuration.InMs20
    };
    RawOutgoingAudioStreamOptions outgoingAudioStreamOptions = new RawOutgoingAudioStreamOptions()
    {
        Properties = outgoingAudioProperties
    };

Create a RawOutgoingAudioStream and attach it to the join call options. The stream automatically starts when the call is connected.

    JoinCallOptions options = new JoinCallOptions(); // or StartCallOptions()
    OutgoingAudioOptions outgoingAudioOptions = new OutgoingAudioOptions();
    RawOutgoingAudioStream rawOutgoingAudioStream = new RawOutgoingAudioStream(outgoingAudioStreamOptions);
    outgoingAudioOptions.Stream = rawOutgoingAudioStream;
    options.OutgoingAudioOptions = outgoingAudioOptions;
    // Start or Join call with those call options.

Attach the stream to a call

Alternatively, you can attach the stream to an existing Call instance instead:

    await call.StartAudio(rawOutgoingAudioStream);

Start sending raw samples

We can only start sending data once the stream state is AudioStreamState.Started. To observe changes to the audio stream state, add a handler to the StateChanged event.

    unsafe private void AudioStateChanged(object sender, AudioStreamStateChanged args)
    {
        if (args.AudioStreamState == AudioStreamState.Started)
        {
            // We can now start sending samples.
        }
    }
    outgoingAudioStream.StateChanged += AudioStateChanged;

Once the stream is started, we can start sending MemoryBuffer audio samples to the call. The audio buffer format should match the specified stream properties. (A sketch that fills the buffer with a test tone follows the example below.)

    void Start()
    {
        RawOutgoingAudioStreamProperties properties = outgoingAudioStream.Properties;
        RawAudioBuffer buffer;
        new Thread(() =>
        {
            DateTime nextDeliverTime = DateTime.Now;
            while (true)
            {
                MemoryBuffer memoryBuffer = new MemoryBuffer((uint)outgoingAudioStream.ExpectedBufferSizeInBytes);
                using (IMemoryBufferReference reference = memoryBuffer.CreateReference())
                {
                    byte* dataInBytes;
                    uint capacityInBytes;
                    ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacityInBytes);
                    // Use AudioGraph here to grab data from microphone if you want microphone data
                }
                nextDeliverTime = nextDeliverTime.AddMilliseconds(20);
                buffer = new RawAudioBuffer(memoryBuffer);
                outgoingAudioStream.SendOutgoingAudioBuffer(buffer);
                TimeSpan wait = nextDeliverTime - DateTime.Now;
                if (wait > TimeSpan.Zero)
                {
                    Thread.Sleep(wait);
                }
            }
        }).Start();
    }
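
The loop above allocates the buffer but leaves it empty. As a minimal sketch of what could go inside the using block, the following fills the buffer with a 440 Hz test tone in the interleaved 16-bit stereo, 48 kHz layout configured earlier (the tone itself and the phase field are illustrative assumptions, not SDK requirements):

    // Requires an unsafe context and a field declared elsewhere, for example: long phase = 0;
    const int sampleRate = 48000;
    const int channels = 2;
    const double frequency = 440.0;
    int totalShorts = (int)(capacityInBytes / sizeof(short));
    short* samples = (short*)dataInBytes;

    for (int i = 0; i + channels <= totalShorts; i += channels)
    {
        double t = (double)(phase++) / sampleRate;
        short value = (short)(Math.Sin(2 * Math.PI * frequency * t) * short.MaxValue * 0.1);
        for (int c = 0; c < channels; c++)
        {
            samples[i + c] = value; // Same sample on every channel
        }
    }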

Receive raw incoming audio

We can also receive the call's audio stream samples as MemoryBuffer if we want to process the audio before playback. Create a RawIncomingAudioStreamOptions object specifying the raw stream properties that we want to receive.

    RawIncomingAudioStreamProperties properties = new RawIncomingAudioStreamProperties()
    {
        Format = AudioStreamFormat.Pcm16Bit,
        SampleRate = AudioStreamSampleRate.Hz44100,
        ChannelMode = AudioStreamChannelMode.Stereo
    };
    RawIncomingAudioStreamOptions audioStreamOptions = new RawIncomingAudioStreamOptions()
    {
        Properties = properties
    };

Create a RawIncomingAudioStream and attach it to the join call options.

    JoinCallOptions options = new JoinCallOptions(); // or StartCallOptions()
    RawIncomingAudioStream rawIncomingAudioStream = new RawIncomingAudioStream(audioStreamOptions);
    IncomingAudioOptions incomingAudioOptions = new IncomingAudioOptions()
    {
        Stream = rawIncomingAudioStream
    };
    options.IncomingAudioOptions = incomingAudioOptions;

Alternatively, we can also attach the stream to an existing Call instance instead:

    await call.StartAudio(rawIncomingAudioStream);

To start receiving raw audio buffers from the incoming stream, add handlers for the incoming stream state changed and mixed audio buffer received events. (A sketch that copies the received bytes follows the example below.)

    unsafe private void OnAudioStateChanged(object sender, AudioStreamStateChanged args)
    {
        if (args.AudioStreamState == AudioStreamState.Started)
        {
            // When the value is `AudioStreamState.Started`, we're able to receive samples.
        }
    }
    private void OnRawIncomingMixedAudioBufferAvailable(object sender, IncomingMixedAudioEventArgs args)
    {
        // Received a raw audio buffer (MemoryBuffer).
        using (IMemoryBufferReference reference = args.IncomingAudioBuffer.Buffer.CreateReference())
        {
            byte* dataInBytes;
            uint capacityInBytes;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacityInBytes);
            // Process the data using AudioGraph class
        }
    }
    rawIncomingAudioStream.StateChanged += OnAudioStateChanged;
    rawIncomingAudioStream.MixedAudioBufferReceived += OnRawIncomingMixedAudioBufferAvailable;
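
As a minimal sketch of processing the data, the received bytes can be copied into a managed array before handing them to other audio APIs (the use of Marshal.Copy here is an illustration, not an SDK requirement):

    // Illustrative: copy the native PCM data into a managed byte[] for further processing.
    byte[] pcmData = new byte[capacityInBytes];
    System.Runtime.InteropServices.Marshal.Copy((IntPtr)dataInBytes, pcmData, 0, (int)capacityInBytes);
    // pcmData now holds interleaved 16-bit PCM samples in the format configured on the stream.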

RawVideo access

Because the app generates the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats that the app can generate. This information lets the Calling SDK pick the best video format configuration for the network conditions at the time.

Virtual video

Supported video resolutions

Aspect ratio    Resolution         Maximum FPS
16x9            1080p              30
16x9            720p               30
16x9            540p               30
16x9            480p               30
16x9            360p               30
16x9            270p               15
16x9            240p               15
16x9            180p               15
4x3             VGA (640x480)      30
4x3             424x320            15
4x3             QVGA (320x240)     15
4x3             212x160            15
  1. Create an array of VideoStreamFormat by using the VideoStreamPixelFormat values that the SDK supports. When multiple formats are available, the order of the formats in the list doesn't influence or prioritize which one is used. The criteria for format selection are based on external factors like network bandwidth.

    var videoStreamFormat = new VideoStreamFormat
    {
        Resolution = VideoStreamResolution.P720, // For VirtualOutgoingVideoStream the width/height should be set using VideoStreamResolution enum
        PixelFormat = VideoStreamPixelFormat.Rgba,
        FramesPerSecond = 30,
        Stride1 = 1280 * 4 // It is times 4 because RGBA is a 32-bit format
    };
    VideoStreamFormat[] videoStreamFormats = { videoStreamFormat };
    
  2. Create RawOutgoingVideoStreamOptions and set Formats with the object that you created earlier.

    var rawOutgoingVideoStreamOptions = new RawOutgoingVideoStreamOptions
    {
        Formats = videoStreamFormats
    };
    
  3. Create an instance of VirtualOutgoingVideoStream by using the RawOutgoingVideoStreamOptions instance that you created earlier.

    var rawOutgoingVideoStream = new VirtualOutgoingVideoStream(rawOutgoingVideoStreamOptions);
    
  4. Subscribe to the RawOutgoingVideoStream.FormatChanged delegate. This event notifies you whenever the VideoStreamFormat changes to one of the video formats provided in the list.

    rawOutgoingVideoStream.FormatChanged += (object sender, VideoStreamFormatChangedEventArgs args) =>
    {
        VideoStreamFormat videoStreamFormat = args.Format;
    };
    
  5. Create an instance of the following helper class to access the buffer data.

    [ComImport]
    [Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IMemoryBufferByteAccess
    {
        void GetBuffer(out byte* buffer, out uint capacity);
    }
    [ComImport]
    [Guid("905A0FEF-BC53-11DF-8C49-001E4FC686DA")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IBufferByteAccess
    {
        void Buffer(out byte* buffer);
    }
    internal static class BufferExtensions
    {
        // For accessing MemoryBuffer
        public static unsafe byte* GetArrayBuffer(IMemoryBuffer memoryBuffer)
        {
            IMemoryBufferReference memoryBufferReference = memoryBuffer.CreateReference();
            var memoryBufferByteAccess = memoryBufferReference as IMemoryBufferByteAccess;
            memoryBufferByteAccess.GetBuffer(out byte* arrayBuffer, out uint arrayBufferCapacity);
            GC.AddMemoryPressure(arrayBufferCapacity);
            return arrayBuffer;
        }
        // For accessing MediaStreamSample
        public static unsafe byte* GetArrayBuffer(IBuffer buffer)
        {
            var bufferByteAccess = buffer as IBufferByteAccess;
            bufferByteAccess.Buffer(out byte* arrayBuffer);
            uint arrayBufferCapacity = buffer.Capacity;
            GC.AddMemoryPressure(arrayBufferCapacity);
            return arrayBuffer;
        }
    }
    
  6. Create an instance of the following helper class to generate random RawVideoFrame objects by using VideoStreamPixelFormat.Rgba.

    public class VideoFrameSender
    {
        private RawOutgoingVideoStream rawOutgoingVideoStream;
        private RawVideoFrameKind rawVideoFrameKind;
        private Thread frameIteratorThread;
        private Random random = new Random();
        private volatile bool stopFrameIterator = false;
        public VideoFrameSender(RawVideoFrameKind rawVideoFrameKind, RawOutgoingVideoStream rawOutgoingVideoStream)
        {
            this.rawVideoFrameKind = rawVideoFrameKind;
            this.rawOutgoingVideoStream = rawOutgoingVideoStream;
        }
        public async void VideoFrameIterator()
        {
            while (!stopFrameIterator)
            {
                if (rawOutgoingVideoStream != null &&
                    rawOutgoingVideoStream.Format != null &&
                    rawOutgoingVideoStream.State == VideoStreamState.Started)
                {
                    await SendRandomVideoFrameRGBA();
                }
            }
        }
        private async Task SendRandomVideoFrameRGBA()
        {
            uint rgbaCapacity = (uint)(rawOutgoingVideoStream.Format.Width * rawOutgoingVideoStream.Format.Height * 4);
            RawVideoFrame videoFrame = null;
            switch (rawVideoFrameKind)
            {
                case RawVideoFrameKind.Buffer:
                    videoFrame = 
                        GenerateRandomVideoFrameBuffer(rawOutgoingVideoStream.Format, rgbaCapacity);
                    break;
                case RawVideoFrameKind.Texture:
                    videoFrame = 
                        GenerateRandomVideoFrameTexture(rawOutgoingVideoStream.Format, rgbaCapacity);
                    break;
            }
            try
            {
                using (videoFrame)
                {
                    await rawOutgoingVideoStream.SendRawVideoFrameAsync(videoFrame);
                }
            }
            catch (Exception ex)
            {
                string msg = ex.Message;
            }
            try
            {
                int delayBetweenFrames = (int)(1000.0 / rawOutgoingVideoStream.Format.FramesPerSecond);
                await Task.Delay(delayBetweenFrames);
            }
            catch (Exception ex)
            {
                string msg = ex.Message;
            }
        }
        private unsafe RawVideoFrame GenerateRandomVideoFrameBuffer(VideoStreamFormat videoFormat, uint rgbaCapacity)
        {
            var rgbaBuffer = new MemoryBuffer(rgbaCapacity);
            byte* rgbaArrayBuffer = BufferExtensions.GetArrayBuffer(rgbaBuffer);
            GenerateRandomVideoFrame(&rgbaArrayBuffer);
            return new RawVideoFrameBuffer()
            {
                Buffers = new MemoryBuffer[] { rgbaBuffer },
                StreamFormat = videoFormat
            };
        }
        private unsafe RawVideoFrame GenerateRandomVideoFrameTexture(VideoStreamFormat videoFormat, uint rgbaCapacity)
        {
            var timeSpan = new TimeSpan(rawOutgoingVideoStream.TimestampInTicks);
            var rgbaBuffer = new Buffer(rgbaCapacity)
            {
                Length = rgbaCapacity
            };
            byte* rgbaArrayBuffer = BufferExtensions.GetArrayBuffer(rgbaBuffer);
            GenerateRandomVideoFrame(&rgbaArrayBuffer);
            var mediaStreamSample = MediaStreamSample.CreateFromBuffer(rgbaBuffer, timeSpan);
            return new RawVideoFrameTexture()
            {
                Texture = mediaStreamSample,
                StreamFormat = videoFormat
            };
        }
        private unsafe void GenerateRandomVideoFrame(byte** rgbaArrayBuffer)
        {
            int w = rawOutgoingVideoStream.Format.Width;
            int h = rawOutgoingVideoStream.Format.Height;
            byte r = (byte)random.Next(1, 255);
            byte g = (byte)random.Next(1, 255);
            byte b = (byte)random.Next(1, 255);
            int rgbaStride = w * 4;
            for (int y = 0; y < h; y++)
            {
                for (int x = 0; x < rgbaStride; x += 4)
                {
                    (*rgbaArrayBuffer)[(w * 4 * y) + x + 0] = (byte)(y % r);
                    (*rgbaArrayBuffer)[(w * 4 * y) + x + 1] = (byte)(y % g);
                    (*rgbaArrayBuffer)[(w * 4 * y) + x + 2] = (byte)(y % b);
                    (*rgbaArrayBuffer)[(w * 4 * y) + x + 3] = 255;
                }
            }
        }
        public void Start()
        {
            frameIteratorThread = new Thread(VideoFrameIterator);
            frameIteratorThread.Start();
        }
        public void Stop()
        {
            try
            {
                if (frameIteratorThread != null)
                {
                    stopFrameIterator = true;
                    frameIteratorThread.Join();
                    frameIteratorThread = null;
                    stopFrameIterator = false;
                }
            }
            catch (Exception ex)
            {
                string msg = ex.Message;
            }
        }
    }
    
  7. Subscribe to the VideoStream.StateChanged delegate. This event notifies you of the state of the current stream. Don't send frames if the state isn't equal to VideoStreamState.Started.

    private VideoFrameSender videoFrameSender;
    rawOutgoingVideoStream.StateChanged += (object sender, VideoStreamStateChangedEventArgs args) =>
    {
        CallVideoStream callVideoStream = args.Stream;
        switch (callVideoStream.State)
            {
                case VideoStreamState.Available:
                    // VideoStream has been attached to the call
                    var frameKind = RawVideoFrameKind.Buffer; // Use the frameKind you prefer
                    //var frameKind = RawVideoFrameKind.Texture;
                    videoFrameSender = new VideoFrameSender(frameKind, rawOutgoingVideoStream);
                    break;
                case VideoStreamState.Started:
                    // Start sending frames
                    videoFrameSender.Start();
                    break;
                case VideoStreamState.Stopped:
                    // Stop sending frames
                    videoFrameSender.Stop();
                    break;
            }
    };
    

ScreenShare video

Because the Windows system generates the frames, you must implement your own foreground service to capture the frames and send them by using the Azure Communication Services Calling API.

Supported video resolutions

Aspect ratio    Resolution              Maximum FPS
Anything        Anything up to 1080p    30

Steps to create a screen share video stream

  1. Create an array of VideoStreamFormat by using the VideoStreamPixelFormat values that the SDK supports. When multiple formats are available, the order of the formats in the list doesn't influence or prioritize which one is used. The criteria for format selection are based on external factors like network bandwidth.
    var videoStreamFormat = new VideoStreamFormat
    {
        Width = 1280, // Width and height can be used for ScreenShareOutgoingVideoStream for custom resolutions or use one of the predefined values inside VideoStreamResolution
        Height = 720,
        //Resolution = VideoStreamResolution.P720,
        PixelFormat = VideoStreamPixelFormat.Rgba,
        FramesPerSecond = 30,
        Stride1 = 1280 * 4 // It is times 4 because RGBA is a 32-bit format.
    };
    VideoStreamFormat[] videoStreamFormats = { videoStreamFormat };
    
  2. Create RawOutgoingVideoStreamOptions and set Formats with the object that you created earlier.
    var rawOutgoingVideoStreamOptions = new RawOutgoingVideoStreamOptions
    {
        Formats = videoStreamFormats
    };
    
  3. Create an instance of ScreenShareOutgoingVideoStream by using the RawOutgoingVideoStreamOptions instance that you created earlier.
    var rawOutgoingVideoStream = new ScreenShareOutgoingVideoStream(rawOutgoingVideoStreamOptions);
    
  4. Capture and send the video frames in the following way.
    private async Task SendRawVideoFrame()
    {
        RawVideoFrame videoFrame = null;
        switch (rawVideoFrameKind) //it depends on the frame kind you want to send
        {
            case RawVideoFrameKind.Buffer:
                MemoryBuffer memoryBuffer = // Fill it with the content you got from the Windows APIs
                videoFrame = new RawVideoFrameBuffer()
                {
                    Buffers = new MemoryBuffer[] { memoryBuffer }, // The number of buffers depends on the VideoStreamPixelFormat
                    StreamFormat = rawOutgoingVideoStream.Format
                };
                break;
            case RawVideoFrameKind.Texture:
                MediaStreamSample mediaStreamSample = // Fill it with the content you got from the Windows APIs
                videoFrame = new RawVideoFrameTexture()
                {
                    Texture = mediaStreamSample, // Texture only receives planar buffers
                    StreamFormat = rawOutgoingVideoStream.Format
                };
                break;
        }
    
        try
        {
            using (videoFrame)
            {
                await rawOutgoingVideoStream.SendRawVideoFrameAsync(videoFrame);
            }
        }
        catch (Exception ex)
        {
            string msg = ex.Message;
        }
    
        try
        {
            int delayBetweenFrames = (int)(1000.0 / rawOutgoingVideoStream.Format.FramesPerSecond);
            await Task.Delay(delayBetweenFrames);
        }
        catch (Exception ex)
        {
            string msg = ex.Message;
        }
    }
    

Raw incoming video

This feature gives you access to the video frames inside IncomingVideoStream objects so that you can manipulate those streams locally.

  1. Create an instance of IncomingVideoOptions that is set on JoinCallOptions through VideoStreamKind.RawIncoming.
    var frameKind = RawVideoFrameKind.Buffer;  // Use the frameKind you prefer to receive
    var incomingVideoOptions = new IncomingVideoOptions
    {
        StreamKind = VideoStreamKind.RawIncoming,
        FrameKind = frameKind
    };
    var joinCallOptions = new JoinCallOptions
    {
        IncomingVideoOptions = incomingVideoOptions
    };
    
  2. After you receive a ParticipantsUpdatedEventArgs event, attach the RemoteParticipant.VideoStreamStateChanged delegate. This event notifies you of the state of the IncomingVideoStream objects.
    private List<RemoteParticipant> remoteParticipantList;
    private void OnRemoteParticipantsUpdated(object sender, ParticipantsUpdatedEventArgs args)
    {
        foreach (RemoteParticipant remoteParticipant in args.AddedParticipants)
        {
            IReadOnlyList<IncomingVideoStream> incomingVideoStreamList = remoteParticipant.IncomingVideoStreams; // Check if there are IncomingVideoStreams already before attaching the delegate
            foreach (IncomingVideoStream incomingVideoStream in incomingVideoStreamList)
            {
                OnRawIncomingVideoStreamStateChanged(incomingVideoStream);
            }
            remoteParticipant.VideoStreamStateChanged += OnVideoStreamStateChanged;
            remoteParticipantList.Add(remoteParticipant); // If the RemoteParticipant ref is not kept alive the VideoStreamStateChanged events are going to be missed
        }
        foreach (RemoteParticipant remoteParticipant in args.RemovedParticipants)
        {
            remoteParticipant.VideoStreamStateChanged -= OnVideoStreamStateChanged;
            remoteParticipantList.Remove(remoteParticipant);
        }
    }
    private void OnVideoStreamStateChanged(object sender, VideoStreamStateChangedEventArgs args)
    {
        CallVideoStream callVideoStream = args.Stream;
        OnRawIncomingVideoStreamStateChanged(callVideoStream as RawIncomingVideoStream);
    }
    private void OnRawIncomingVideoStreamStateChanged(RawIncomingVideoStream rawIncomingVideoStream)
    {
        switch (rawIncomingVideoStream.State)
        {
            case VideoStreamState.Available:
                // There is a new IncomingVideoStream
                rawIncomingVideoStream.RawVideoFrameReceived += OnVideoFrameReceived;
                rawIncomingVideoStream.Start();
                break;
            case VideoStreamState.Started:
                // Will start receiving video frames
                break;
            case VideoStreamState.Stopped:
                // Will stop receiving video frames
                break;
            case VideoStreamState.NotAvailable:
                // The IncomingVideoStream should not be used anymore
                rawIncomingVideoStream.RawVideoFrameReceived -= OnVideoFrameReceived;
                break;
        }
    }
    
  3. At this point, on streams that have the VideoStreamState.Available state, the RawIncomingVideoStream.RawVideoFrameReceived delegate is attached as shown in the previous step. That delegate provides the new RawVideoFrame objects. (A minimal buffer-access sketch follows the example below.)
    private async void OnVideoFrameReceived(object sender, RawVideoFrameReceivedEventArgs args)
    {
        RawVideoFrame videoFrame = args.Frame;
        switch (videoFrame.Kind) // The type will be whatever was configured on the IncomingVideoOptions
        {
            case RawVideoFrameKind.Buffer:
                // Render/Modify/Save the video frame
                break;
            case RawVideoFrameKind.Texture:
                // Render/Modify/Save the video frame
                break;
        }
    }
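
    As a minimal sketch of the RawVideoFrameKind.Buffer case, the frame's plane can be read through the BufferExtensions helper defined earlier. For VideoStreamPixelFormat.Rgba there is a single plane of Width * Height * 4 bytes; what to do with the bytes (render, modify, save) is app-specific, and the helper method name below is illustrative:

    private unsafe void HandleRawVideoFrameBuffer(RawVideoFrameBuffer videoFrameBuffer)
    {
        VideoStreamFormat format = videoFrameBuffer.StreamFormat;
        int rgbaCapacity = format.Width * format.Height * 4; // RGBA: 4 bytes per pixel

        // For RGBA there is one buffer (plane); planar formats such as NV12 carry more than one.
        MemoryBuffer plane = videoFrameBuffer.Buffers[0];
        byte* rgbaBytes = BufferExtensions.GetArrayBuffer(plane);

        // Example access: read the first pixel's red channel.
        byte firstRed = rgbaBytes[0];
    }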
    

In this quickstart, you learn how to implement raw media access by using the Azure Communication Services Calling SDK for Android.

The Azure Communication Services Calling SDK provides APIs that let apps generate their own video frames to send to remote participants in a call.

This quickstart builds on Quickstart: Add 1:1 video calling to your app for Android.

RawAudio access

Accessing raw audio media gives you access to the call's incoming audio stream, along with the ability to view and send custom outgoing audio streams during a call.

Send raw outgoing audio

Create an options object specifying the raw stream properties that we want to send.

    RawOutgoingAudioStreamProperties outgoingAudioProperties = new RawOutgoingAudioStreamProperties()
                .setAudioFormat(AudioStreamFormat.PCM16_BIT)
                .setSampleRate(AudioStreamSampleRate.HZ44100)
                .setChannelMode(AudioStreamChannelMode.STEREO)
                .setBufferDuration(AudioStreamBufferDuration.IN_MS20);

    RawOutgoingAudioStreamOptions outgoingAudioStreamOptions = new RawOutgoingAudioStreamOptions()
                .setProperties(outgoingAudioProperties);

Create a RawOutgoingAudioStream and attach it to the join call options. The stream automatically starts when the call is connected.

    JoinCallOptions options = new JoinCallOptions(); // or StartCallOptions()

    OutgoingAudioOptions outgoingAudioOptions = new OutgoingAudioOptions();
    RawOutgoingAudioStream rawOutgoingAudioStream = new RawOutgoingAudioStream(outgoingAudioStreamOptions);

    outgoingAudioOptions.setStream(rawOutgoingAudioStream);
    options.setOutgoingAudioOptions(outgoingAudioOptions);

    // Start or Join call with those call options.

Attach the stream to a call

Alternatively, you can attach the stream to an existing Call instance instead:

    CompletableFuture<Void> result = call.startAudio(context, rawOutgoingAudioStream);

Start sending raw samples

We can only start sending data once the stream state is AudioStreamState.STARTED. To observe changes to the audio stream state, add a listener by using addOnStateChangedListener.

    private void onStateChanged(PropertyChangedEvent propertyChangedEvent) {
        // When value is `AudioStreamState.STARTED` we'll be able to send audio samples.
    }

    rawOutgoingAudioStream.addOnStateChangedListener(this::onStateChanged);

Once the stream is started, we can start sending java.nio.ByteBuffer audio samples to the call.

The audio buffer format should match the specified stream properties.

    Thread thread = new Thread(){
        public void run() {
            RawAudioBuffer buffer;
            Calendar nextDeliverTime = Calendar.getInstance();
            while (true)
            {
                nextDeliverTime.add(Calendar.MILLISECOND, 20);
                byte data[] = new byte[outgoingAudioStream.getExpectedBufferSizeInBytes()];
                //can grab microphone data from AudioRecord
                ByteBuffer dataBuffer = ByteBuffer.allocateDirect(outgoingAudioStream.getExpectedBufferSizeInBytes());
                dataBuffer.rewind();
                buffer = new RawAudioBuffer(dataBuffer);
                outgoingAudioStream.sendOutgoingAudioBuffer(buffer);
                long wait = nextDeliverTime.getTimeInMillis() - Calendar.getInstance().getTimeInMillis();
                if (wait > 0)
                {
                    try {
                        Thread.sleep(wait);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    };
    thread.start();

Receive raw incoming audio

We can also receive the call's audio stream samples as java.nio.ByteBuffer if we want to process the audio before playback.

Create a RawIncomingAudioStreamOptions object specifying the raw stream properties that we want to receive.

    RawIncomingAudioStreamOptions audioStreamOptions = new RawIncomingAudioStreamOptions();
    RawIncomingAudioStreamProperties properties = new RawIncomingAudioStreamProperties()
                .setAudioFormat(AudioStreamFormat.PCM16_BIT)
                .setSampleRate(AudioStreamSampleRate.HZ44100)
                .setChannelMode(AudioStreamChannelMode.STEREO);
    audioStreamOptions.setProperties(properties);

Create a RawIncomingAudioStream and attach it to the join call options.

    JoinCallOptions options = new JoinCallOptions(); // or StartCallOptions()
    IncomingAudioOptions incomingAudioOptions = new IncomingAudioOptions();

    RawIncomingAudioStream rawIncomingAudioStream = new RawIncomingAudioStream(audioStreamOptions);
    incomingAudioOptions.setStream(rawIncomingAudioStream);
    options.setIncomingAudioOptions(incomingAudioOptions);

Alternatively, we can also attach the stream to an existing Call instance instead:


    CompletableFuture<Void> result = call.startAudio(context, rawIncomingAudioStream);

To start receiving raw audio buffers from the incoming stream, add listeners for the incoming stream state and for received buffers.

    private void onStateChanged(PropertyChangedEvent propertyChangedEvent) {
        // When value is `AudioStreamState.STARTED` we'll be able to receive samples.
    }

    private void onMixedAudioBufferReceived(IncomingMixedAudioEvent incomingMixedAudioEvent) {
        // Received a raw audio buffer (java.nio.ByteBuffer).
    }

    rawIncomingAudioStream.addOnStateChangedListener(this::onStateChanged);
    rawIncomingAudioStream.addMixedAudioBufferReceivedListener(this::onMixedAudioBufferReceived);

Remember to stop the audio stream in the current Call instance:


    CompletableFuture<Void> result = call.stopAudio(context, rawIncomingAudioStream);

RawVideo access

Because the app generates the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats that the app can generate. This information lets the Calling SDK pick the best video format configuration for the network conditions at the time.

Virtual video

Supported video resolutions

Aspect ratio    Resolution         Maximum FPS
16x9            1080p              30
16x9            720p               30
16x9            540p               30
16x9            480p               30
16x9            360p               30
16x9            270p               15
16x9            240p               15
16x9            180p               15
4x3             VGA (640x480)      30
4x3             424x320            15
4x3             QVGA (320x240)     15
4x3             212x160            15
  1. Create an array of VideoStreamFormat by using the VideoStreamPixelFormat values that the SDK supports.

    When multiple formats are available, the order of the formats in the list doesn't influence or prioritize which one is used. The criteria for format selection are based on external factors like network bandwidth.

    VideoStreamFormat videoStreamFormat = new VideoStreamFormat();
    videoStreamFormat.setResolution(VideoStreamResolution.P360);
    videoStreamFormat.setPixelFormat(VideoStreamPixelFormat.RGBA);
    videoStreamFormat.setFramesPerSecond(framerate);
    videoStreamFormat.setStride1(w * 4); // It is times 4 because RGBA is a 32-bit format
    
    List<VideoStreamFormat> videoStreamFormats = new ArrayList<>();
    videoStreamFormats.add(videoStreamFormat);
    
  2. Create RawOutgoingVideoStreamOptions and set Formats with the object that you created earlier.

    RawOutgoingVideoStreamOptions rawOutgoingVideoStreamOptions = new RawOutgoingVideoStreamOptions();
    rawOutgoingVideoStreamOptions.setFormats(videoStreamFormats);
    
  3. Create an instance of VirtualOutgoingVideoStream by using the RawOutgoingVideoStreamOptions instance that you created earlier.

    VirtualOutgoingVideoStream rawOutgoingVideoStream = new VirtualOutgoingVideoStream(rawOutgoingVideoStreamOptions);
    
  4. Subscribe to the RawOutgoingVideoStream.addOnFormatChangedListener delegate. This event notifies you whenever the VideoStreamFormat changes to one of the video formats provided in the list.

    virtualOutgoingVideoStream.addOnFormatChangedListener((VideoStreamFormatChangedEvent args) -> 
    {
        VideoStreamFormat videoStreamFormat = args.getFormat();
    });
    
  5. Create an instance of the following helper class to generate random RawVideoFrame objects by using VideoStreamPixelFormat.RGBA.

    public class VideoFrameSender
    {
        private RawOutgoingVideoStream rawOutgoingVideoStream;
        private Thread frameIteratorThread;
        private Random random = new Random();
        private volatile boolean stopFrameIterator = false;
    
        public VideoFrameSender(RawOutgoingVideoStream rawOutgoingVideoStream)
        {
            this.rawOutgoingVideoStream = rawOutgoingVideoStream;
        }
    
        public void VideoFrameIterator()
        {
            while (!stopFrameIterator)
            {
                if (rawOutgoingVideoStream != null && 
                    rawOutgoingVideoStream.getFormat() != null && 
                    rawOutgoingVideoStream.getState() == VideoStreamState.STARTED)
                {
                    SendRandomVideoFrameRGBA();
                }
            }
        }
    
        private void SendRandomVideoFrameRGBA()
        {
            int rgbaCapacity = rawOutgoingVideoStream.getFormat().getWidth() * rawOutgoingVideoStream.getFormat().getHeight() * 4;
    
            RawVideoFrame videoFrame = GenerateRandomVideoFrameBuffer(rawOutgoingVideoStream.getFormat(), rgbaCapacity);
    
            try
            {
                rawOutgoingVideoStream.sendRawVideoFrame(videoFrame).get();
    
                int delayBetweenFrames = (int)(1000.0 / rawOutgoingVideoStream.getFormat().getFramesPerSecond());
                Thread.sleep(delayBetweenFrames);
            }
            catch (Exception ex)
            {
                String msg = ex.getMessage();
            }
            finally
            {
                videoFrame.close();
            }
        }
    
        private RawVideoFrame GenerateRandomVideoFrameBuffer(VideoStreamFormat videoStreamFormat, int rgbaCapacity)
        {
            ByteBuffer rgbaBuffer = ByteBuffer.allocateDirect(rgbaCapacity); // Only allocateDirect ByteBuffers are allowed
            rgbaBuffer.order(ByteOrder.nativeOrder());
    
            GenerateRandomVideoFrame(rgbaBuffer, rgbaCapacity);
    
            RawVideoFrameBuffer videoFrameBuffer = new RawVideoFrameBuffer();
            videoFrameBuffer.setBuffers(Arrays.asList(rgbaBuffer));
            videoFrameBuffer.setStreamFormat(videoStreamFormat);
    
            return videoFrameBuffer;
        }
    
        private void GenerateRandomVideoFrame(ByteBuffer rgbaBuffer, int rgbaCapacity)
        {
            int w = rawOutgoingVideoStream.getFormat().getWidth();
            int h = rawOutgoingVideoStream.getFormat().getHeight();
    
            byte rVal = (byte)random.nextInt(255);
            byte gVal = (byte)random.nextInt(255);
            byte bVal = (byte)random.nextInt(255);
            byte aVal = (byte)255;
    
            byte[] rgbaArrayBuffer = new byte[rgbaCapacity];
    
            int rgbaStride = w * 4;
    
            for (int y = 0; y < h; y++)
            {
                for (int x = 0; x < rgbaStride; x += 4)
                {
                    rgbaArrayBuffer[(w * 4 * y) + x + 0] = rVal;
                    rgbaArrayBuffer[(w * 4 * y) + x + 1] = gVal;
                    rgbaArrayBuffer[(w * 4 * y) + x + 2] = bVal;
                    rgbaArrayBuffer[(w * 4 * y) + x + 3] = aVal;
                }
            }
    
            rgbaBuffer.put(rgbaArrayBuffer);
            rgbaBuffer.rewind();
        }
    
        public void Start()
        {
            frameIteratorThread = new Thread(this::VideoFrameIterator);
            frameIteratorThread.start();
        }
    
        public void Stop()
        {
            try
            {
                if (frameIteratorThread != null)
                {
                    stopFrameIterator = true;
    
                    frameIteratorThread.join();
                    frameIteratorThread = null;
    
                    stopFrameIterator = false;
                }
            }
            catch (InterruptedException ex)
            {
                String msg = ex.getMessage();
            }
        }
    }
    
  6. Subscribe to the VideoStream.addOnStateChangedListener delegate. This delegate notifies you of the state of the current stream. Don't send frames if the state isn't equal to VideoStreamState.STARTED.

    private VideoFrameSender videoFrameSender;
    
    rawOutgoingVideoStream.addOnStateChangedListener((VideoStreamStateChangedEvent args) ->
    {
        CallVideoStream callVideoStream = args.getStream();
    
        switch (callVideoStream.getState())
        {
            case AVAILABLE:
                videoFrameSender = new VideoFrameSender(rawOutgoingVideoStream);
                break;
            case STARTED:
                // Start sending frames
                videoFrameSender.Start();
                break;
            case STOPPED:
                // Stop sending frames
                videoFrameSender.Stop();
                break;
        }
    });
    

ScreenShare video

Because the Android system generates the frames, you must implement your own foreground service to capture the frames and send them by using the Azure Communication Services Calling API.

Supported video resolutions

Aspect ratio    Resolution              Maximum FPS
Anything        Anything up to 1080p    30

Steps to create a screen share video stream

  1. Create an array of VideoStreamFormat by using the VideoStreamPixelFormat values that the SDK supports.

    When multiple formats are available, the order of the formats in the list doesn't influence or prioritize which one is used. The criteria for format selection are based on external factors like network bandwidth.

    VideoStreamFormat videoStreamFormat = new VideoStreamFormat();
    videoStreamFormat.setWidth(1280); // Width and height can be used for ScreenShareOutgoingVideoStream for custom resolutions or use one of the predefined values inside VideoStreamResolution
    videoStreamFormat.setHeight(720);
    //videoStreamFormat.setResolution(VideoStreamResolution.P360);
    videoStreamFormat.setPixelFormat(VideoStreamPixelFormat.RGBA);
    videoStreamFormat.setFramesPerSecond(framerate);
    videoStreamFormat.setStride1(w * 4); // It is times 4 because RGBA is a 32-bit format
    
    List<VideoStreamFormat> videoStreamFormats = new ArrayList<>();
    videoStreamFormats.add(videoStreamFormat);
    
  2. Create RawOutgoingVideoStreamOptions and set Formats with the object that you created earlier.

    RawOutgoingVideoStreamOptions rawOutgoingVideoStreamOptions = new RawOutgoingVideoStreamOptions();
    rawOutgoingVideoStreamOptions.setFormats(videoStreamFormats);
    
  3. Create an instance of ScreenShareOutgoingVideoStream by using the RawOutgoingVideoStreamOptions instance that you created earlier.

    ScreenShareOutgoingVideoStream rawOutgoingVideoStream = new ScreenShareOutgoingVideoStream(rawOutgoingVideoStreamOptions);
    
  4. Capture and send the video frames in the following way.

    private void SendRawVideoFrame()
    {
        ByteBuffer byteBuffer = // Fill it with the content you got from the Windows APIs
        RawVideoFrameBuffer videoFrame = new RawVideoFrameBuffer();
        videoFrame.setBuffers(Arrays.asList(byteBuffer)); // The number of buffers depends on the VideoStreamPixelFormat
        videoFrame.setStreamFormat(rawOutgoingVideoStream.getFormat());
    
        try
        {
            rawOutgoingVideoStream.sendRawVideoFrame(videoFrame).get();
        }
        catch (Exception ex)
        {
            String msg = ex.getMessage();
        }
        finally
        {
            videoFrame.close();
        }
    }
    

Raw incoming video

This feature gives you access to the video frames inside IncomingVideoStream objects so that you can manipulate those frames locally.

  1. Create an instance of IncomingVideoOptions that is set on JoinCallOptions through VideoStreamKind.RAW_INCOMING.

    IncomingVideoOptions incomingVideoOptions = new IncomingVideoOptions()
            .setStreamType(VideoStreamKind.RAW_INCOMING);
    
    JoinCallOptions joinCallOptions = new JoinCallOptions()
            .setIncomingVideoOptions(incomingVideoOptions);
    
  2. After you receive a ParticipantsUpdatedEventArgs event, attach the RemoteParticipant.VideoStreamStateChanged delegate. This event notifies you of the state of the IncomingVideoStream objects.

    private List<RemoteParticipant> remoteParticipantList;
    
    private void OnRemoteParticipantsUpdated(ParticipantsUpdatedEventArgs args)
    {
        for (RemoteParticipant remoteParticipant : args.getAddedParticipants())
        {
            List<IncomingVideoStream> incomingVideoStreamList = remoteParticipant.getIncomingVideoStreams(); // Check if there are IncomingVideoStreams already before attaching the delegate
            for (IncomingVideoStream incomingVideoStream : incomingVideoStreamList)
            {
                OnRawIncomingVideoStreamStateChanged(incomingVideoStream);
            }
    
            remoteParticipant.addOnVideoStreamStateChanged(this::OnVideoStreamStateChanged);
            remoteParticipantList.add(remoteParticipant); // If the RemoteParticipant ref is not kept alive the VideoStreamStateChanged events are going to be missed
        }
    
        for (RemoteParticipant remoteParticipant : args.getRemovedParticipants())
        {
            remoteParticipant.removeOnVideoStreamStateChanged(this::OnVideoStreamStateChanged);
            remoteParticipantList.remove(remoteParticipant);
        }
    }
    
    private void OnVideoStreamStateChanged(VideoStreamStateChangedEventArgs args)
    {
        CallVideoStream callVideoStream = args.getStream();
    
        OnRawIncomingVideoStreamStateChanged((RawIncomingVideoStream) callVideoStream);
    }
    
    private void OnRawIncomingVideoStreamStateChanged(RawIncomingVideoStream rawIncomingVideoStream)
    {
        switch (rawIncomingVideoStream.getState())
        {
            case AVAILABLE:
                // There is a new IncomingVideoStream
                rawIncomingVideoStream.addOnRawVideoFrameReceived(this::OnVideoFrameReceived);
                rawIncomingVideoStream.start();
    
                break;
            case STARTED:
                // Will start receiving video frames
                break;
            case STOPPED:
                // Will stop receiving video frames
                break;
            case NOT_AVAILABLE:
                // The IncomingVideoStream should not be used anymore
                rawIncomingVideoStream.removeOnRawVideoFrameReceived(this::OnVideoFrameReceived);
    
                break;
        }
    }
    
  3. At this point, on streams that have the AVAILABLE state, the RawVideoFrameReceived delegate is attached as shown in the previous step. That delegate provides the new RawVideoFrame objects.

    private void OnVideoFrameReceived(RawVideoFrameReceivedEventArgs args)
    {
        // Render/Modify/Save the video frame
        RawVideoFrameBuffer videoFrame = (RawVideoFrameBuffer) args.getFrame();
    }
    

In this quickstart, you learn how to implement raw media access by using the Azure Communication Services Calling SDK for iOS.

The Azure Communication Services Calling SDK provides APIs that let apps generate their own video frames to send to remote participants in a call.

This quickstart builds on Quickstart: Add 1:1 video calling to your app for iOS.

RawAudio access

Accessing raw audio media gives you access to the call's incoming audio stream, along with the ability to view and send custom outgoing audio streams during a call.

Send raw outgoing audio

Create an options object specifying the raw stream properties that we want to send.

    let outgoingAudioStreamOptions = RawOutgoingAudioStreamOptions()
    let properties = RawOutgoingAudioStreamProperties()
    properties.sampleRate = .hz44100
    properties.bufferDuration = .inMs20
    properties.channelMode = .mono
    properties.format = .pcm16Bit
    outgoingAudioStreamOptions.properties = properties

Create a RawOutgoingAudioStream and attach it to the join call options. The stream automatically starts when the call is connected.

    let options = JoinCallOptions() // or StartCallOptions()

    let outgoingAudioOptions = OutgoingAudioOptions()
    self.rawOutgoingAudioStream = RawOutgoingAudioStream(rawOutgoingAudioStreamOptions: outgoingAudioStreamOptions)
    outgoingAudioOptions.stream = self.rawOutgoingAudioStream
    options.outgoingAudioOptions = outgoingAudioOptions

    // Start or Join call passing the options instance.

Attach the stream to a call

Alternatively, you can attach the stream to an existing Call instance instead:


    call.startAudio(stream: self.rawOutgoingAudioStream) { error in 
        // Stream attached to `Call`.
    }

Start sending raw samples

We can only start sending data once the stream state is AudioStreamState.started. To observe changes to the audio stream state, implement RawOutgoingAudioStreamDelegate and set it as the stream delegate.

    func rawOutgoingAudioStream(_ rawOutgoingAudioStream: RawOutgoingAudioStream,
                                didChangeState args: AudioStreamStateChangedEventArgs) {
        // When value is `AudioStreamState.started` we will be able to send audio samples.
    }

    self.rawOutgoingAudioStream.delegate = DelegateImplementer()

Or use the closure-based implementation:

    self.rawOutgoingAudioStream.events.onStateChanged = { args in
        // When value is `AudioStreamState.started` we will be able to send audio samples.
    }

Once the stream is started, we can start sending AVAudioPCMBuffer audio samples to the call.

The audio buffer format should match the specified stream properties.

    protocol SamplesProducer {
        func produceSample(_ currentSample: Int, 
                           options: RawOutgoingAudioStreamOptions) -> AVAudioPCMBuffer
    }

    // Let's use a simple Tone data producer as example.
    // Producing PCM buffers.
    func produceSamples(_ currentSample: Int,
                        stream: RawOutgoingAudioStream,
                        options: RawOutgoingAudioStreamOptions) -> AVAudioPCMBuffer {
        let sampleRate = options.properties.sampleRate
        let channelMode = options.properties.channelMode
        let bufferDuration = options.properties.bufferDuration
        let numberOfChunks = UInt32(1000 / bufferDuration.value)
        let bufferFrameSize = UInt32(sampleRate.valueInHz) / numberOfChunks
        let frequency = 400

        guard let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                         sampleRate: sampleRate.valueInHz,
                                         channels: channelMode.channelCount,
                                         interleaved: channelMode == .stereo) else {
            fatalError("Failed to create PCM Format")
        }

        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: bufferFrameSize) else {
            fatalError("Failed to create PCM buffer")
        }

        buffer.frameLength = bufferFrameSize

        let factor: Double = ((2 as Double) * Double.pi) / (sampleRate.valueInHz/Double(frequency))
        var interval = 0
        for sampleIdx in 0..<Int(buffer.frameCapacity * channelMode.channelCount) {
            let sample = sin(factor * Double(currentSample + interval))
            // Scale to maximum amplitude. Int16.max is 32,767.
            let value = Int16(sample * Double(Int16.max))
            
            guard let underlyingByteBuffer = buffer.mutableAudioBufferList.pointee.mBuffers.mData else {
                continue
            }
            underlyingByteBuffer.assumingMemoryBound(to: Int16.self).advanced(by: sampleIdx).pointee = value
            interval += channelMode == .mono ? 2 : 1
        }

        return buffer
    }

    final class RawOutgoingAudioSender {
        let stream: RawOutgoingAudioStream
        let options: RawOutgoingAudioStreamOptions
        let producer: SamplesProducer

        private var timer: Timer?
        private var currentSample: Int = 0
        private var currentTimestamp: Int64 = 0

        init(stream: RawOutgoingAudioStream,
             options: RawOutgoingAudioStreamOptions,
             producer: SamplesProducer) {
            self.stream = stream
            self.options = options
            self.producer = producer
        }

        func start() {
            let properties = self.options.properties
            let interval = properties.bufferDuration.timeInterval

            let channelCount = AVAudioChannelCount(properties.channelMode.channelCount)
            let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                       sampleRate: properties.sampleRate.valueInHz,
                                       channels: channelCount,
                                       interleaved: channelCount > 1)!
            self.timer = Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { [weak self] _ in
                guard let self = self else { return }
                let sample = self.producer.produceSamples(self.currentSample, options: self.options)
                let rawBuffer = RawAudioBuffer()
                rawBuffer.buffer = sample
                rawBuffer.timestampInTicks = self.currentTimestamp
                self.stream.send(buffer: rawBuffer, completionHandler: { error in
                    if let error = error {
                        // Handle possible error.
                    }
                })

                self.currentTimestamp += Int64(properties.bufferDuration.value)
                self.currentSample += 1
            }
        }

        func stop() {
            self.timer?.invalidate()
            self.timer = nil
        }

        deinit {
            stop()
        }
    }

Remember to stop the audio stream in the current Call instance:


    call.stopAudio(stream: self.rawOutgoingAudioStream) { error in 
        // Stream detached from `Call` and stopped.
    }

Capture microphone samples

Using Apple's AVAudioEngine, we can capture microphone frames by tapping the audio engine input node. By capturing the microphone data and using the raw audio functionality, we can process the audio before sending it to the call.

    import AVFoundation
    import AzureCommunicationCalling

    enum MicrophoneSenderError: Error {
        case notMatchingFormat
    }

    final class MicrophoneDataSender {
        private let stream: RawOutgoingAudioStream
        private let properties: RawOutgoingAudioStreamProperties
        private let format: AVAudioFormat
        private let audioEngine: AVAudioEngine = AVAudioEngine()

        init(properties: RawOutgoingAudioStreamProperties) throws {
            // This can be different depending on which device we are running or value set for
            // `try AVAudioSession.sharedInstance().setPreferredSampleRate(...)`.
            let nodeFormat = self.audioEngine.inputNode.outputFormat(forBus: 0)
            let matchingSampleRate = AudioSampleRate.allCases.first(where: { $0.valueInHz == nodeFormat.sampleRate })
            guard let inputNodeSampleRate = matchingSampleRate else {
                throw MicrophoneSenderError.notMatchingFormat
            }

            // Override the sample rate to one that matches audio session (Audio engine input node frequency).
            properties.sampleRate = inputNodeSampleRate

            let options = RawOutgoingAudioStreamOptions()
            options.properties = properties

            self.stream = RawOutgoingAudioStream(rawOutgoingAudioStreamOptions: options)
            let channelCount = AVAudioChannelCount(properties.channelMode.channelCount)
            self.format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                        sampleRate: properties.sampleRate.valueInHz,
                                        channels: channelCount,
                                        interleaved: channelCount > 1)!
            self.properties = properties
        }

        func start() throws {
            guard !self.audioEngine.isRunning else {
                return
            }

            // Install tap documentations states that we can get between 100 and 400 ms of data.
            let framesFor100ms = AVAudioFrameCount(self.format.sampleRate * 0.1)

            // Note that some formats may not be allowed by `installTap`, so we have to specify the 
            // correct properties.
            self.audioEngine.inputNode.installTap(onBus: 0, bufferSize: framesFor100ms, 
                                                  format: self.format) { [weak self] buffer, _ in
                guard let self = self else { return }
                
                let rawBuffer = RawAudioBuffer()
                rawBuffer.buffer = buffer
                // Although we specified either 10ms or 20ms, we allow sending up to 100ms of data
                // as long as it can be evenly divided by the specified size.
                self.stream.send(buffer: rawBuffer) { error in
                    if let error = error {
                        // Handle error
                    }
                }
            }

            try audioEngine.start()
        }

        func stop() {
            audioEngine.stop()
        }
    }

Note

The sample rate of the audio engine input node defaults to the preferred sample rate of the shared audio session, so we can't install a tap on that node by using a different value. We therefore have to make sure that the RawOutgoingStream properties sample rate matches the one we get when tapping into the microphone samples, or convert the tap buffers to the format that the outgoing stream expects.

With this small sample, we learned how to capture the microphone data with AVAudioEngine and send those samples to a call by using the raw outgoing audio feature.

Receive raw incoming audio

We can also receive the call's audio stream samples as AVAudioPCMBuffer if we want to process the audio before playback.

Create a RawIncomingAudioStreamOptions object specifying the raw stream properties that we want to receive.

    let audioStreamOptions = RawIncomingAudioStreamOptions()
    let properties = RawIncomingAudioStreamProperties()
    properties.format = .pcm16Bit
    properties.sampleRate = .hz44100
    properties.channelMode = .stereo
    audioStreamOptions.properties = properties

Create a RawIncomingAudioStream and attach it to the join call options.

    let options =  JoinCallOptions() // or StartCallOptions()
    let incomingAudioOptions = IncomingAudioOptions()

    self.rawIncomingStream = RawIncomingAudioStream(rawIncomingAudioStreamOptions: audioStreamOptions)
    incomingAudioOptions.stream = self.rawIncomingStream
    options.incomingAudioOptions = incomingAudioOptions

Alternatively, we can also attach the stream to an existing Call instance instead:


    call.startAudio(stream: self.rawIncomingStream) { error in 
        // Stream attached to `Call`.
    }

To start receiving raw audio buffers from the incoming stream, implement RawIncomingAudioStreamDelegate:

    class RawIncomingReceiver: NSObject, RawIncomingAudioStreamDelegate {
        func rawIncomingAudioStream(_ rawIncomingAudioStream: RawIncomingAudioStream,
                                    didChangeState args: AudioStreamStateChangedEventArgs) {
            // To be notified when stream started and stopped.
        }
        
        func rawIncomingAudioStream(_ rawIncomingAudioStream: RawIncomingAudioStream,
                                    mixedAudioBufferReceived args: IncomingMixedAudioEventArgs) {
            // Receive raw audio buffers(AVAudioPCMBuffer) and process using AVAudioEngine API's.
        }
    }

    self.rawIncomingStream.delegate = RawIncomingReceiver()

    rawIncomingAudioStream.events.mixedAudioBufferReceived = { args in
        // Receive raw audio buffers(AVAudioPCMBuffer) and process them using AVAudioEngine API's.
    }

    rawIncomingAudioStream.events.onStateChanged = { args in
        // To be notified when stream started and stopped.
    }

RawVideo access

Because the app generates the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats that the app can generate. This information lets the Calling SDK pick the best video format configuration for the network conditions at the time.

Virtual video

Supported video resolutions

Aspect ratio    Resolution         Maximum FPS
16x9            1080p              30
16x9            720p               30
16x9            540p               30
16x9            480p               30
16x9            360p               30
16x9            270p               15
16x9            240p               15
16x9            180p               15
4x3             VGA (640x480)      30
4x3             424x320            15
4x3             QVGA (320x240)     15
4x3             212x160            15
  1. Create an array of VideoStreamFormat by using the VideoStreamPixelFormat values that the SDK supports. When multiple formats are available, the order of the formats in the list doesn't influence or prioritize which one is used. The criteria for format selection are based on external factors like network bandwidth.

    var videoStreamFormat = VideoStreamFormat()
    videoStreamFormat.resolution = VideoStreamResolution.p360
    videoStreamFormat.pixelFormat = VideoStreamPixelFormat.nv12
    videoStreamFormat.framesPerSecond = framerate // for example, 15
    videoStreamFormat.stride1 = w // w is the resolution width: bytes per row of the Y plane
    videoStreamFormat.stride2 = w // for NV12, the interleaved UV plane also uses w bytes per row
    
    var videoStreamFormats: [VideoStreamFormat] = [VideoStreamFormat]()
    videoStreamFormats.append(videoStreamFormat)
    
  2. Create RawOutgoingVideoStreamOptions, and set formats with the previously created object.

    var rawOutgoingVideoStreamOptions = RawOutgoingVideoStreamOptions()
    rawOutgoingVideoStreamOptions.formats = videoStreamFormats
    
  3. Create an instance of VirtualOutgoingVideoStream by using the RawOutgoingVideoStreamOptions instance that you created previously.

    var rawOutgoingVideoStream = VirtualOutgoingVideoStream(videoStreamOptions: rawOutgoingVideoStreamOptions)
    
  4. Implement the VirtualOutgoingVideoStreamDelegate delegate. The didChangeFormat event notifies you whenever the VideoStreamFormat changes to one of the video formats provided in the list. (A minimal sketch of this handler appears after this list.)

    virtualOutgoingVideoStream.delegate = /* Attach delegate and implement didChangeFormat */
    
  5. Create an instance of the following helper class to access the CVPixelBuffer plane data:

    final class BufferExtensions: NSObject {
        public static func getArrayBuffersUnsafe(cvPixelBuffer: CVPixelBuffer) -> Array<UnsafeMutableRawPointer?>
        {
            var bufferArrayList: Array<UnsafeMutableRawPointer?> = [UnsafeMutableRawPointer?]()
    
            // Lock for read and write access because the caller fills the planes.
            let cvStatus: CVReturn = CVPixelBufferLockBaseAddress(cvPixelBuffer, [])
    
            if cvStatus == kCVReturnSuccess {
                let planeCount = CVPixelBufferGetPlaneCount(cvPixelBuffer)
                for i in 0..<planeCount {
                    let bufferRef = CVPixelBufferGetBaseAddressOfPlane(cvPixelBuffer, i)
                    bufferArrayList.append(bufferRef)
                }
            }
    
            return bufferArrayList
        }
    }
    
  6. Create an instance of the following helper class to generate random RawVideoFrameBuffer objects by using VideoStreamPixelFormat.nv12:

    final class VideoFrameSender : NSObject
    {
        private var rawOutgoingVideoStream: RawOutgoingVideoStream
        private var frameIteratorThread: Thread?
        private var stopFrameIterator: Bool = false
    
        public init(rawOutgoingVideoStream: RawOutgoingVideoStream)
        {
            self.rawOutgoingVideoStream = rawOutgoingVideoStream
        }
    
        @objc private func VideoFrameIterator()
        {
            while !stopFrameIterator {
                if rawOutgoingVideoStream.format != nil &&
                   rawOutgoingVideoStream.state == .started {
                    SendRandomVideoFrameNV12()
                }
           }
        }
    
        public func SendRandomVideoFrameNV12() -> Void
        {
            let videoFrameBuffer = GenerateRandomVideoFrameBuffer()
    
            rawOutgoingVideoStream.send(frame: videoFrameBuffer) { error in
                /*Handle error if non-nil*/
            }
    
            // Wait roughly one frame interval before sending the next frame.
            let frameIntervalSeconds = 1.0 / Double(rawOutgoingVideoStream.format.framesPerSecond)
            usleep(useconds_t(frameIntervalSeconds * 1_000_000))
        }
    
        private func GenerateRandomVideoFrameBuffer() -> RawVideoFrame
        {
            var cvPixelBuffer: CVPixelBuffer? = nil
            guard CVPixelBufferCreate(kCFAllocatorDefault,
                                    Int(rawOutgoingVideoStream.format.width),
                                    Int(rawOutgoingVideoStream.format.height),
                                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                    nil,
                                    &cvPixelBuffer) == kCVReturnSuccess else {
                fatalError()
            }
    
            GenerateRandomVideoFrameNV12(cvPixelBuffer: cvPixelBuffer!)
    
            CVPixelBufferUnlockBaseAddress(cvPixelBuffer!, [])
    
            let videoFrameBuffer = RawVideoFrameBuffer()
            videoFrameBuffer.buffer = cvPixelBuffer!
            videoFrameBuffer.streamFormat = rawOutgoingVideoStream.format
    
            return videoFrameBuffer
        }
    
        private func GenerateRandomVideoFrameNV12(cvPixelBuffer: CVPixelBuffer) {
            let w = Int(rawOutgoingVideoStream.format.width)
            let h = Int(rawOutgoingVideoStream.format.height)
    
            let bufferArrayList = BufferExtensions.getArrayBuffersUnsafe(cvPixelBuffer: cvPixelBuffer)
    
            guard bufferArrayList.count >= 2, let yArrayBuffer = bufferArrayList[0], let uvArrayBuffer = bufferArrayList[1] else {
                return
            }
    
            let yVal = UInt8.random(in: 1..<255)
            let uvVal = UInt8.random(in: 1..<255)
    
            // Fill the Y plane: one byte per pixel.
            // Note: this assumes bytesPerRow == width; otherwise use CVPixelBufferGetBytesPerRowOfPlane.
            for y in 0..<h
            {
                for x in 0..<w
                {
                    yArrayBuffer.storeBytes(of: yVal, toByteOffset: (y * w) + x, as: UInt8.self)
                }
            }
    
            // Fill the interleaved UV plane: half the height, w bytes per row (U and V alternate).
            for y in 0..<(h / 2)
            {
                for x in 0..<w
                {
                    uvArrayBuffer.storeBytes(of: uvVal, toByteOffset: (y * w) + x, as: UInt8.self)
                }
            }
        }
    
        public func Start() {
            stopFrameIterator = false
            frameIteratorThread = Thread(target: self, selector: #selector(VideoFrameIterator), object: "VideoFrameSender")
            frameIteratorThread?.start()
        }
    
        public func Stop() {
            if frameIteratorThread != nil {
                stopFrameIterator = true
                frameIteratorThread?.cancel()
                frameIteratorThread = nil
            }
        }
    }
    
  7. Implement VirtualOutgoingVideoStreamDelegate. The didChangeState event notifies you about the state of the current stream. Don't send frames if the state isn't equal to VideoStreamState.started.

    /*Delegate Implementer*/ 
    private var videoFrameSender: VideoFrameSender?
    func virtualOutgoingVideoStream(
        _ virtualOutgoingVideoStream: VirtualOutgoingVideoStream,
        didChangeState args: VideoStreamStateChangedEventArgs) {
        switch args.stream.state {
            case .available:
                videoFrameSender = VideoFrameSender(rawOutgoingVideoStream: rawOutgoingVideoStream)
            case .started:
                /* Start sending frames */
                videoFrameSender?.Start()
            case .stopped:
                /* Stop sending frames */
                videoFrameSender?.Stop()
            default:
                break
        }
    }
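
For reference, here's a minimal sketch of the format-change handler mentioned in step 4. It assumes the didChangeFormat callback mirrors the didChangeState callback from step 7 and delivers a VideoStreamFormatChangedEventArgs with a format property; verify the exact signature against the SDK version you're using.

    func virtualOutgoingVideoStream(
        _ virtualOutgoingVideoStream: VirtualOutgoingVideoStream,
        didChangeFormat args: VideoStreamFormatChangedEventArgs) {
        // The SDK picked a (possibly different) format from the list supplied in step 1.
        // Update any frame-generation state (resolution, strides, FPS) to match it.
        let newFormat = args.format // assumed property name, mirroring args.Format on other platforms
        print("Outgoing video format changed: \(newFormat.width)x\(newFormat.height) @ \(newFormat.framesPerSecond) fps")
    }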
    

ScreenShare video

Because the operating system generates the screen-share frames, you must implement your own foreground service to capture the frames and send them by using the Azure Communication Services Calling API.

Supported video resolutions

Aspect ratio    Resolution               Maximum FPS
Anything        Anything up to 1080p     30

Steps to create a screen-share video stream

  1. Create an array of VideoFormat by using the VideoStreamPixelFormat values that the SDK supports. When multiple formats are available, the order of the formats in the list doesn't influence or prioritize which one is used. The criteria for format selection are based on external factors like network bandwidth.

    let videoStreamFormat = VideoStreamFormat()
    videoStreamFormat.width = 1280 /* Width and height can be used for ScreenShareOutgoingVideoStream for custom resolutions or use one of the predefined values inside VideoStreamResolution */
    videoStreamFormat.height = 720
    /*videoStreamFormat.resolution = VideoStreamResolution.p360*/
    videoStreamFormat.pixelFormat = VideoStreamPixelFormat.rgba
    videoStreamFormat.framesPerSecond = framerate // for example, 30
    videoStreamFormat.stride1 = 1280 * 4 /* width * 4 because RGBA is a 32-bit format */
    
    var videoStreamFormats: [VideoStreamFormat] = []
    videoStreamFormats.append(videoStreamFormat)
    
  2. Create RawOutgoingVideoStreamOptions, and set formats with the previously created object.

    var rawOutgoingVideoStreamOptions = RawOutgoingVideoStreamOptions()
    rawOutgoingVideoStreamOptions.formats = videoStreamFormats
    
  3. Create an instance of ScreenShareOutgoingVideoStream by using the RawOutgoingVideoStreamOptions instance that you created previously.

    var rawOutgoingVideoStream = ScreenShareOutgoingVideoStream(videoStreamOptions: rawOutgoingVideoStreamOptions)
    
  4. Capture and send the video frames in the following way. (A minimal ReplayKit capture sketch appears after this list.)

    private func SendRawVideoFrame() -> Void
    {
        let cvPixelBuffer: CVPixelBuffer = /* Fill it with the content captured from the OS screen-capture APIs. The number of planes depends on the VideoStreamPixelFormat */
        let videoFrameBuffer = RawVideoFrameBuffer()
        videoFrameBuffer.buffer = cvPixelBuffer
        videoFrameBuffer.streamFormat = rawOutgoingVideoStream.format
    
        rawOutgoingVideoStream.send(frame: videoFrameBuffer) { error in
            /*Handle error if not nil*/
        }
    }
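
One way to capture the screen frames mentioned in step 4 is ReplayKit, which delivers CMSampleBuffer objects that wrap a CVPixelBuffer. The following is a minimal sketch only; it assumes rawOutgoingVideoStream is the ScreenShareOutgoingVideoStream from step 3 and is accessible here, and it omits error handling and frame-rate throttling.

    import ReplayKit
    import CoreMedia

    /// Minimal sketch: capture the screen with ReplayKit and forward each frame to the outgoing stream.
    func startScreenCapture() {
        RPScreenRecorder.shared().startCapture(handler: { sampleBuffer, sampleBufferType, error in
            guard error == nil,
                  sampleBufferType == .video,
                  let cvPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
            }

            // Note: make sure the pixel format configured in step 1 matches what ReplayKit delivers
            // (typically a bi-planar YCbCr format), or convert the buffer before sending.
            let videoFrameBuffer = RawVideoFrameBuffer()
            videoFrameBuffer.buffer = cvPixelBuffer
            videoFrameBuffer.streamFormat = rawOutgoingVideoStream.format

            rawOutgoingVideoStream.send(frame: videoFrameBuffer) { error in
                /* Handle error if not nil */
            }
        }, completionHandler: { error in
            /* Handle capture start error if not nil */
        })
    }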
    

Raw incoming video

This feature gives you access to the video frames inside IncomingVideoStream objects so that you can manipulate those stream objects locally.

  1. Create an instance of IncomingVideoOptions that sets VideoStreamKind.rawIncoming through JoinCallOptions.

    var incomingVideoOptions = IncomingVideoOptions()
    incomingVideoOptions.streamType = VideoStreamKind.rawIncoming
    var joinCallOptions = JoinCallOptions()
    joinCallOptions.incomingVideoOptions = incomingVideoOptions
    
  2. After you receive a ParticipantsUpdatedEventArgs event, attach the RemoteParticipant.delegate.didChangedVideoStreamState delegate. This event informs you of the state of IncomingVideoStream objects.

    private var remoteParticipantList: [RemoteParticipant] = []
    
    func call(_ call: Call, didUpdateRemoteParticipant args: ParticipantsUpdatedEventArgs) {
        args.addedParticipants.forEach { remoteParticipant in
            remoteParticipant.incomingVideoStreams.forEach { incomingVideoStream in
                OnRawIncomingVideoStreamStateChanged(rawIncomingVideoStream: incomingVideoStream as! RawIncomingVideoStream)
            }
            remoteParticipant.delegate = /* Attach delegate OnVideoStreamStateChanged*/
        }
    
        args.removedParticipants.forEach { remoteParticipant in
            remoteParticipant.delegate = nil
        }
    }
    
    func remoteParticipant(_ remoteParticipant: RemoteParticipant, 
                           didVideoStreamStateChanged args: VideoStreamStateChangedEventArgs) {
        OnRawIncomingVideoStreamStateChanged(rawIncomingVideoStream: args.stream as! RawIncomingVideoStream)
    }
    
    func OnRawIncomingVideoStreamStateChanged(rawIncomingVideoStream: RawIncomingVideoStream) {
        switch rawIncomingVideoStream.state {
            case .available:
                /* There is a new IncomingVideoStream */
                rawIncomingVideoStream.delegate = /* Attach delegate OnVideoFrameReceived */
                rawIncomingVideoStream.start()
            case .started:
                /* Will start receiving video frames */
                break
            case .stopped:
                /* Will stop receiving video frames */
                break
            case .notAvailable:
                /* The IncomingVideoStream should not be used anymore */
                rawIncomingVideoStream.delegate = nil
            default:
                break
        }
    }
    
  3. At this point, when an IncomingVideoStream has the VideoStreamState.available state, attach the RawIncomingVideoStream.delegate.didReceivedRawVideoFrame delegate as shown in the previous step. That event provides new RawVideoFrame objects. (A minimal rendering sketch appears after this list.)

    func rawIncomingVideoStream(_ rawIncomingVideoStream: RawIncomingVideoStream, 
                                didRawVideoFrameReceived args: RawVideoFrameReceivedEventArgs) {
        /* Render/Modify/Save the video frame */
        let videoFrame = args.frame as! RawVideoFrameBuffer
    }
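
As one example of what you can do with the frame received in step 3, the following minimal sketch converts the frame's CVPixelBuffer into a UIImage that you could display in a UIImageView. Only the Core Image and UIKit calls are standard APIs; how you surface the image in your UI is up to you.

    import CoreImage
    import UIKit

    /// Minimal sketch: turn a received RawVideoFrameBuffer into a UIImage for display.
    func makeImage(from videoFrameBuffer: RawVideoFrameBuffer) -> UIImage? {
        let ciImage = CIImage(cvPixelBuffer: videoFrameBuffer.buffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }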
    

As a developer, you can access the raw media for incoming and outgoing audio, video, and screen-sharing content during a call so that you can capture, analyze, and process the audio/video content. Access to Azure Communication Services client-side raw audio, raw video, and raw screen sharing gives developers an almost unlimited ability to view and edit the audio, video, and screen-sharing content that happens within the Azure Communication Services Calling SDK. In this quickstart, you learn how to implement raw media access by using the Azure Communication Services Calling SDK for JavaScript.

For example:

  • You can access the call's audio/video streams directly on the call object and send custom outgoing audio/video streams during the call.
  • You can inspect audio and video streams to run custom AI models for analysis. Such models might include natural language processing to analyze conversations, or they might provide real-time insights and suggestions to boost agent productivity.
  • Organizations can use audio and video media streams to analyze sentiment when providing virtual care for patients or to provide remote assistance during video calls that use mixed reality. This capability opens a path for developers to apply innovations to enhance interaction experiences.

Prerequisites

Important

The examples here are available in version 1.13.1 of the Calling SDK for JavaScript. Be sure to use that version or a later one when you try this quickstart.

Access raw audio

Accessing raw audio media gives you access to the incoming call's audio stream, along with the ability to view and send custom outgoing audio streams during a call.

Access an incoming raw audio stream

Use the following code to access an incoming call's audio stream.

const userId = 'acs_user_id';
const call = callAgent.startCall(userId);
const callStateChangedHandler = async () => {
    if (call.state === "Connected") {
        const remoteAudioStream = call.remoteAudioStreams[0];
        const mediaStream = await remoteAudioStream.getMediaStream();
	// process the incoming call's audio media stream track
    }
};

callStateChangedHandler();
call.on("stateChanged", callStateChangedHandler);

Place a call with a custom audio stream

Use the following code to start a call with a custom audio stream instead of using the user's microphone device.

const createBeepAudioStreamToSend = () => {
    const context = new AudioContext();
    const dest = context.createMediaStreamDestination();
    const os = context.createOscillator();
    os.type = 'sine';
    os.frequency.value = 500;
    os.connect(dest);
    os.start();
    const { stream } = dest;
    return stream;
};

...
const userId = 'acs_user_id';
const mediaStream = createBeepAudioStreamToSend();
const localAudioStream = new LocalAudioStream(mediaStream);
const callOptions = {
    audioOptions: {
        localAudioStreams: [localAudioStream]
    }
};
callAgent.startCall(userId, callOptions);

Switch to a custom audio stream during a call

Use the following code to switch the input device to a custom audio stream instead of using the user's microphone device during a call.

const createBeepAudioStreamToSend = () => {
    const context = new AudioContext();
    const dest = context.createMediaStreamDestination();
    const os = context.createOscillator();
    os.type = 'sine';
    os.frequency.value = 500;
    os.connect(dest);
    os.start();
    const { stream } = dest;
    return stream;
};

...

const userId = 'acs_user_id';
const mediaStream = createBeepAudioStreamToSend();
const localAudioStream = new LocalAudioStream(mediaStream);
const call = callAgent.startCall(userId);
const callStateChangedHandler = async () => {
    if (call.state === 'Connected') {
        await call.startAudio(localAudioStream);
    }
};

callStateChangedHandler();
call.on('stateChanged', callStateChangedHandler);

Stop a custom audio stream

Use the following code to stop sending a custom audio stream after it has been set during a call.

call.stopAudio();

Access raw video

Raw video media gives you an instance of a MediaStream object. (For more information, see the JavaScript documentation.) Raw video media gives access specifically to the MediaStream object for incoming and outgoing calls. For raw video, you can use that object to apply filters by using machine learning to process the frames of the video.

Processed raw outgoing video frames can be sent as the sender's outgoing video. Processed raw incoming video frames can be rendered on the receiver side.

Place a call with a custom video stream

You can access the raw video stream for an outgoing call. You use MediaStream for the outgoing raw video stream to process frames by using machine learning and to apply filters. The processed outgoing video can then be sent as a sender video stream.

This example sends canvas data to a user as outgoing video.

const createVideoMediaStreamToSend = () => {
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    canvas.width = 1500;
    canvas.height = 845;
    ctx.fillStyle = 'blue';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    const colors = ['red', 'yellow', 'green'];
    window.setInterval(() => {
        if (ctx) {
            ctx.fillStyle = colors[Math.floor(Math.random() * colors.length)];
            const x = Math.floor(Math.random() * canvas.width);
            const y = Math.floor(Math.random() * canvas.height);
            const size = 100;
            ctx.fillRect(x, y, size, size);
        }
    }, 1000 / 30);

    return canvas.captureStream(30);
};

...
const userId = 'acs_user_id';
const mediaStream = createVideoMediaStreamToSend();
const localVideoStream = new LocalVideoStream(mediaStream);
const callOptions = {
    videoOptions: {
        localVideoStreams: [localVideoStream]
    }
};
callAgent.startCall(userId, callOptions);

Switch to a custom video stream during a call

Use the following code to switch the input device to a custom video stream instead of using the user's camera device during a call.

const createVideoMediaStreamToSend = () => {
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    canvas.width = 1500;
    canvas.height = 845;
    ctx.fillStyle = 'blue';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    const colors = ['red', 'yellow', 'green'];
    window.setInterval(() => {
        if (ctx) {
            ctx.fillStyle = colors[Math.floor(Math.random() * colors.length)];
            const x = Math.floor(Math.random() * canvas.width);
            const y = Math.floor(Math.random() * canvas.height);
            const size = 100;
            ctx.fillRect(x, y, size, size);
	 }
    }, 1000 / 30);

    return canvas.captureStream(30);
};

...

const userId = 'acs_user_id';
const call = callAgent.startCall(userId);
const callStateChangedHandler = async () => {
    if (call.state === 'Connected') {    	
        const mediaStream = createVideoMediaStreamToSend();
        const localVideoStream = call.localVideoStreams.find((stream) => { return stream.mediaStreamType === 'Video' });
        await localVideoStream.setMediaStream(mediaStream);
    }
};

callStateChangedHandler();
call.on('stateChanged', callStateChangedHandler);

Stop a custom video stream

Use the following code to stop sending a custom video stream after it has been set during a call.

// Stop video by passing the same `localVideoStream` instance that was used to start video
await call.stopVideo(localVideoStream);

When switching from a camera that has custom effects applied to another camera device, first stop the video, switch the source on LocalVideoStream, and then start the video again.

const cameras = await this.deviceManager.getCameras();
const newCameraDeviceInfo = cameras.find(cameraDeviceInfo => { return cameraDeviceInfo.id === '<another camera that you want to switch to>' });
// If current camera is using custom raw media stream and video is on
if (this.localVideoStream.mediaStreamType === 'RawMedia' && this.state.videoOn) {
	// Stop raw custom video first
	await this.call.stopVideo(this.localVideoStream);
	// Switch the local video stream's source to the new camera to use
	this.localVideoStream?.switchSource(newCameraDeviceInfo);
	// Start video with the new camera device
	await this.call.startVideo(this.localVideoStream);

// Else if current camera is using normal stream from camera device and video is on
} else if (this.localVideoStream.mediaStreamType === 'Video' && this.state.videoOn) {
	// You can just switch the source, no need to stop and start again. Sent video will automatically switch to the new camera to use
	this.localVideoStream?.switchSource(newCameraDeviceInfo);
}

Access an incoming video stream from a remote participant

You can access the raw video stream for an incoming call. You use MediaStream for the incoming raw video stream to process frames by using machine learning and to apply filters. The processed incoming video can then be rendered on the receiver side.

const remoteVideoStream = remoteParticipants[0].videoStreams.find((stream) => { return stream.mediaStreamType === 'Video'});
const processMediaStream = async () => {
    if (remoteVideoStream.isAvailable) {
	// remote video stream is turned on, process the video's raw media stream.
	const mediaStream = await remoteVideoStream.getMediaStream();
    } else {
	// remote video stream is turned off, handle it
    }
};

remoteVideoStream.on('isAvailableChanged', async () => {
    await processMediaStream();
});

await processMediaStream();

Important

This feature of Azure Communication Services is currently in preview.

Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities.

For more information, review Supplemental Terms of Use for Microsoft Azure Previews.

Raw screen-sharing access is in public preview and available as part of version 1.15.1-beta.1+.

Access raw screen sharing

Raw screen-sharing media gives access specifically to the MediaStream object for incoming and outgoing screen-sharing streams. For raw screen sharing, you can use that object to apply filters by using machine learning to process the frames of the screen share.

Processed raw screen-sharing frames can be sent as the sender's outgoing screen share. Processed raw incoming screen-sharing frames can be rendered on the receiver side.

Start screen sharing with a custom screen-sharing stream

const createVideoMediaStreamToSend = () => {
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    canvas.width = 1500;
    canvas.height = 845;
    ctx.fillStyle = 'blue';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    const colors = ['red', 'yellow', 'green'];
    window.setInterval(() => {
        if (ctx) {
            ctx.fillStyle = colors[Math.floor(Math.random() * colors.length)];
            const x = Math.floor(Math.random() * canvas.width);
            const y = Math.floor(Math.random() * canvas.height);
            const size = 100;
            ctx.fillRect(x, y, size, size);
        }
    }, 1000 / 30);

    return canvas.captureStream(30);
};

...
const mediaStream = createVideoMediaStreamToSend();
const localScreenSharingStream = new LocalVideoStream(mediaStream);
// Will start screen sharing with custom raw media stream
await call.startScreenSharing(localScreenSharingStream);
console.log(localScreenSharingStream.mediaStreamType) // 'RawMedia'

Access the raw screen-sharing stream from a screen, browser tab, or app, and apply effects to the stream

The following example shows how to apply a black-and-white effect to the raw screen-sharing stream from a screen, browser tab, or app. Note: The canvas context filter = "grayscale(1)" API is not supported in Safari.

let bwTimeout;
let bwVideoElem;

const applyBlackAndWhiteEffect = function (stream) {
	let width = 1280, height = 720;
	bwVideoElem = document.createElement("video");
	bwVideoElem.srcObject = stream;
	bwVideoElem.height = height;
	bwVideoElem.width = width;
	bwVideoElem.play();
	const canvas = document.createElement('canvas');
	const bwCtx = canvas.getContext('2d', { willReadFrequently: true });
	canvas.width = width;
	canvas.height = height;
	
	const FPS = 30;
	const processVideo = function () {
	    try {
		let begin = Date.now();
		// start processing.
		// NOTE: The Canvas context filter API is not supported in Safari
		bwCtx.filter = "grayscale(1)";
		bwCtx.drawImage(bwVideoElem, 0, 0, width, height);
		const imageData = bwCtx.getImageData(0, 0, width, height);
		bwCtx.putImageData(imageData, 0, 0);
		// schedule the next one.
		let delay = Math.abs(1000/FPS - (Date.now() - begin));
		bwTimeout = setTimeout(processVideo, delay);
	    } catch (err) {
		console.error(err);
	    }
	}
	
	// schedule the first one.
	bwTimeout = setTimeout(processVideo, 0);
	return canvas.captureStream(FPS);
}

// Call startScreenSharing API without passing any stream parameter. Browser will prompt the user to select the screen, browser tab, or app to share in the call.
await call.startScreenSharing();
const localScreenSharingStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'ScreenSharing' });
console.log(localScreenSharingStream.mediaStreamType); // 'ScreenSharing'
// Get the raw media stream from the screen, browser tab, or application
const rawMediaStream = await localScreenSharingStream.getMediaStream();
// Apply effects to the media stream as you wish
const blackAndWhiteMediaStream = applyBlackAndWhiteEffect(rawMediaStream);
// Set the media stream with effects on the local screen sharing stream
await localScreenSharingStream.setMediaStream(blackAndWhiteMediaStream);

// Stop screen sharing and clean up the black and white video filter
await call.stopScreenSharing();
clearTimeout(bwTimeout);
bwVideoElem.srcObject.getVideoTracks().forEach((track) => { track.stop(); });
bwVideoElem.srcObject = null;

Stop sending a screen-sharing stream

Use the following code to stop sending a custom screen-sharing stream after it has been set during a call.

// Stop sending raw screen sharing stream
await call.stopScreenSharing(localScreenSharingStream);

Access an incoming screen-sharing stream from a remote participant

You can access the raw screen-sharing stream from a remote participant. You use MediaStream for the incoming raw screen-sharing stream to process frames by using machine learning and to apply filters. The processed incoming screen-sharing stream can then be rendered on the receiver side.

const remoteScreenSharingStream = remoteParticipants[0].videoStreams.find((stream) => { return stream.mediaStreamType === 'ScreenSharing'});
const processMediaStream = async () => {
    if (remoteScreenSharingStream.isAvailable) {
	// remote screen sharing stream is turned on, process the stream's raw media stream.
	const mediaStream = await remoteScreenSharingStream.getMediaStream();
    } else {
	// remote screen sharing stream is turned off, handle it
    }
};

remoteScreenSharingStream.on('isAvailableChanged', async () => {
    await processMediaStream();
});

await processMediaStream();

Next steps

For more information, see the following articles: