Walkthrough: Using Microsoft Media Foundation for Windows Phone 8
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
Microsoft Media Foundation (MF) is a framework for audio and video capture and playback on the Windows desktop. Microsoft Media Foundation for Windows Phone is a reimplementation of a subset of the MF APIs. With this feature, Windows Phone 8 apps can implement the following scenarios.
Render video to textures in apps that use native code.
Display cinematics for games.
Play in-game background audio, such as a game soundtrack, in apps that use native code.
Tip
This feature is not intended for background streaming scenarios such as internet radio stations or music player apps. Audio played using MF plays only while the app is in the foreground. For info about creating apps for background streaming scenarios, see How to play background audio for Windows Phone 8.
Supported APIs
Windows Phone 8 supports a subset of APIs supported on Windows 8. For a list of Media Foundation APIs supported on Windows Phone, see Supported Microsoft Media Foundation APIs for Windows Phone 8.
Walkthrough: Using MF to render video in a Direct3D app
In this topic we walk you through the process of creating an app that uses MF to render a video to a texture. We then map the texture onto geometry, and then render the geometry to the screen. This technique requires quite a bit of code to implement. If you are creating a XAML app, a XAML and Direct3D app, or a Direct3D with XAML app and you only want to play a video file in a 2-D rectangular area on the screen, using a MediaElement control is a much quicker and easier way to do this. If you are creating a Direct3D app, XAML controls are not supported and so you must use MF to render video. You also must use MF if you want to use video as a texture on 3-D geometry.
This walkthrough starts with the Direct3D app project template. Then, in two phases we create an example app. First, we’ll create a wrapper class called MediaEnginePlayer, which wraps the functionality of the MF APIs. Then, we’ll modify the CubeRenderer class, which is included in the project template, to use the MediaEnginePlayer class.
Setting up the project
To get started, we need to create a new project and configure it to use the Media Foundation library.
To set up the project
In Visual Studio, on the File menu, select New Project. Under Templates, expand C++, and then select Windows Phone. In the list, select Windows Phone Direct3D App (Native Only). Name the project whatever you like, and then click OK.
To add the Media Foundation static library to the linker’s input, on the Project menu, select [project name] Properties. In the left pane of the Property Pages window, expand Configuration Properties, expand Linker, and then select Input. In the center pane, on the Additional Dependencies line, add the text “mfplat.lib;” to the beginning of the line. Be careful not to change any of the existing input parameters.
Creating the MediaEnginePlayer class
The MediaEnginePlayer class serves as a helper class that wraps the MF APIs. We’ll also define a second class, MediaEngineNotify, to receive notifications from the Media Engine. However, this class is very simple and therefore we can define it in the same .cpp implementation file as MediaEnginePlayer.
To implement the MediaEnginePlayer class
On the Project menu, select Add Class. Select Visual C++, and then select C++ class. Click Add. This displays the Generic C++ Class Wizard. In the Class name text box, type MediaEnginePlayer, and then click Finish. This adds the files MediaEnginePlayer.cpp and MediaEnginePlayer.h to your project.
Replace the contents of MediaEnginePlayer.h with the following code.
``` cpp
#pragma once

#include "DirectXHelper.h"
#include <wrl.h>
#include <mfmediaengine.h>
#include <strsafe.h>
#include <mfapi.h>
#include <agile.h>

using namespace std;
using namespace Microsoft::WRL;
using namespace Windows::Foundation;
using namespace Windows::UI::Core;
using namespace Windows::Storage;
using namespace Windows::Storage::Streams;

// MediaEngineNotifyCallback - Defines the callback method to process media engine events.
struct MediaEngineNotifyCallback abstract
{
    virtual void OnMediaEngineEvent(DWORD meEvent) = 0;
};

class MediaEnginePlayer : public MediaEngineNotifyCallback
{
    ComPtr<IMFMediaEngine> m_spMediaEngine;
    ComPtr<IMFMediaEngineEx> m_spEngineEx;
    MFARGB m_bkgColor;

public:
    MediaEnginePlayer();
    ~MediaEnginePlayer();

    // Media Info
    void GetNativeVideoSize(DWORD *cx, DWORD *cy);
    bool IsPlaying();

    // Initialize/Shutdown
    void Initialize(ComPtr<ID3D11Device> device, DXGI_FORMAT d3dFormat);
    void Shutdown();

    // Media Engine related
    void OnMediaEngineEvent(DWORD meEvent);

    // Media Engine Actions
    void Play();
    void Pause();
    void SetMuted(bool muted);

    // Media Source
    void SetSource(Platform::String^ sourceUri);
    void SetBytestream(IRandomAccessStream^ streamHandle, Platform::String^ szURL);

    // Transfer Video Frame
    void TransferFrame(ComPtr<ID3D11Texture2D> texture, MFVideoNormalizedRect rect, RECT rcTarget);

private:
    bool m_isPlaying;
};
```
Following the include and using directives, a struct called MediaEngineNotifyCallback is defined, which contains one pure virtual method, OnMediaEngineEvent. The MediaEnginePlayer class derives from this struct. This inheritance allows the MediaEngineNotify class, which we'll define in the .cpp implementation file, to forward the events it receives from the Media Engine to the MediaEnginePlayer class. Next, we declare smart pointers for the IMFMediaEngine and IMFMediaEngineEx interfaces, and an MFARGB struct, which sets the background color used by the Media Engine. The Boolean variable m_isPlaying tracks the playback state of the Media Engine. The rest of the header file consists of declarations for class methods that we'll describe as we walk through the MediaEnginePlayer.cpp file.
Next, we modify the MediaEnginePlayer.cpp file. Below the include directives, paste the following definition of the MediaEngineNotify class.
``` cpp
class MediaEngineNotify : public IMFMediaEngineNotify
{
    long m_cRef;
    MediaEngineNotifyCallback* m_pCB;

public:
    MediaEngineNotify() : m_cRef(1), m_pCB(nullptr)
    {
    }

    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if(__uuidof(IMFMediaEngineNotify) == riid)
        {
            *ppv = static_cast<IMFMediaEngineNotify*>(this);
        }
        else
        {
            *ppv = nullptr;
            return E_NOINTERFACE;
        }

        AddRef();
        return S_OK;
    }

    STDMETHODIMP_(ULONG) AddRef()
    {
        return InterlockedIncrement(&m_cRef);
    }

    STDMETHODIMP_(ULONG) Release()
    {
        LONG cRef = InterlockedDecrement(&m_cRef);
        if (cRef == 0)
        {
            delete this;
        }
        return cRef;
    }

    void MediaEngineNotifyCallback(MediaEngineNotifyCallback* pCB)
    {
        m_pCB = pCB;
    }

    // EventNotify is called when the Media Engine sends an event.
    STDMETHODIMP EventNotify(DWORD meEvent, DWORD_PTR param1, DWORD param2)
    {
        if (meEvent == MF_MEDIA_ENGINE_EVENT_NOTIFYSTABLESTATE)
        {
            SetEvent(reinterpret_cast<HANDLE>(param1));
        }
        else
        {
            m_pCB->OnMediaEngineEvent(meEvent);
        }
        return S_OK;
    }
};
```
This class implements the IMFMediaEngineNotify interface, which the Media Engine uses to send notifications about state changes, for example, when a supplied video stream is ready to play or when playback has stopped. MediaEngineNotify has a member variable of type MediaEngineNotifyCallback*, the base type of our MediaEnginePlayer class. MediaEngineNotify keeps a reference to the MediaEnginePlayer instance in this variable so that it can forward Media Engine events to it.
After its constructor, the MediaEngineNotify class provides implementations of the standard COM interface methods, QueryInterface, AddRef, and Release. Next, the MediaEngineNotifyCallback method is provided so that the MediaEnginePlayer class can register itself to receive Media Engine events.
Finally, the IMFMediaEngineNotify::EventNotify method is defined. The Media Engine calls this method whenever an event is raised. The MF_MEDIA_ENGINE_EVENT_NOTIFYSTABLESTATE event is a waitable event that the Media Engine raises before it loads the provided media. The thread on which the media is loaded waits until the event is signaled with SetEvent before it proceeds to load the content. All other events are passed on to the MediaEnginePlayer class.
Next, we’ll define the constructor for the MediaEnginePlayer class, which initializes all of the member variables.
``` cpp
MediaEnginePlayer::MediaEnginePlayer() :
    m_spMediaEngine(nullptr),
    m_spEngineEx(nullptr),
    m_isPlaying(false)
{
    memset(&m_bkgColor, 0, sizeof(MFARGB));
}
```
Now we’ll define the Initialize method. This method is responsible for starting up the Media Engine, configuring its properties, and hooking up the MediaEngineNotify class.
``` cpp
void MediaEnginePlayer::Initialize(
    ComPtr<ID3D11Device> device,
    DXGI_FORMAT d3dFormat)
{
    ComPtr<IMFMediaEngineClassFactory> spFactory;
    ComPtr<IMFAttributes> spAttributes;
    ComPtr<MediaEngineNotify> spNotify;

    DX::ThrowIfFailed(MFStartup(MF_VERSION));

    UINT resetToken;
    ComPtr<IMFDXGIDeviceManager> DXGIManager;
    DX::ThrowIfFailed(MFCreateDXGIDeviceManager(&resetToken, &DXGIManager));
    DX::ThrowIfFailed(DXGIManager->ResetDevice(device.Get(), resetToken));

    // Create our event callback object.
    spNotify = new MediaEngineNotify();
    if (spNotify == nullptr)
    {
        DX::ThrowIfFailed(E_OUTOFMEMORY);
    }

    spNotify->MediaEngineNotifyCallback(this);

    // Set configuration attributes.
    DX::ThrowIfFailed(MFCreateAttributes(&spAttributes, 1));
    DX::ThrowIfFailed(spAttributes->SetUnknown(MF_MEDIA_ENGINE_DXGI_MANAGER, (IUnknown*) DXGIManager.Get()));
    DX::ThrowIfFailed(spAttributes->SetUnknown(MF_MEDIA_ENGINE_CALLBACK, (IUnknown*) spNotify.Get()));
    DX::ThrowIfFailed(spAttributes->SetUINT32(MF_MEDIA_ENGINE_VIDEO_OUTPUT_FORMAT, d3dFormat));

    // Create MediaEngine.
    DX::ThrowIfFailed(CoCreateInstance(CLSID_MFMediaEngineClassFactory, nullptr, CLSCTX_ALL, IID_PPV_ARGS(&spFactory)));
    DX::ThrowIfFailed(spFactory->CreateInstance(0, spAttributes.Get(), &m_spMediaEngine));

    // Query for the extended IMFMediaEngineEx interface.
    DX::ThrowIfFailed(m_spMediaEngine.Get()->QueryInterface(__uuidof(IMFMediaEngineEx), (void**) &m_spEngineEx));

    return;
}
```
The arguments to Initialize are an ID3D11Device and a DXGI_FORMAT value. The Media Engine uses the device to copy video frames into the texture supplied by our app. The DXGI_FORMAT value lets the Media Engine know the format of the texture.
The first thing Initialize does, after declaring some local variables, is to call MFStartup, which initializes Media Foundation. This call must be made before trying to create an instance of any of the MF interfaces. Next, a DXGI Device Manager is created with MFCreateDXGIDeviceManager, and then IMFDXGIDeviceManager::ResetDevice is called. The DXGI Device Manager gives the Media Engine the ability to share the app's graphics device.
Next, we create an instance of the MediaEngineNotify class that was defined earlier in the file, and call its MediaEngineNotifyCallback method to register the MediaEnginePlayer to receive event callbacks.
The next several lines of code initialize the Media Engine. First, we create an IMFAttributes interface and set a few attributes. The MF_MEDIA_ENGINE_DXGI_MANAGER attribute is set to the DXGI Device Manager we created earlier. This puts the Media Engine into frame server mode, which is the only supported mode on the phone. Then, the class we created to receive callbacks is registered with the MF_MEDIA_ENGINE_CALLBACK attribute, and the texture format is set with MF_MEDIA_ENGINE_VIDEO_OUTPUT_FORMAT. Next, an IMFMediaEngineClassFactory is created with CoCreateInstance, and IMFMediaEngineClassFactory::CreateInstance is called with the attributes we just defined to get an instance of IMFMediaEngine. Finally, QueryInterface is called to get a pointer to the IMFMediaEngineEx interface.
Next, we define two methods for setting the video source that the Media Engine will render to a texture: one takes a file URI and the other takes a byte stream. The first method takes the URI of the video file as a Platform::String. The Media Engine expects the string as a BSTR, so this method allocates a BSTR, copies the URI string into it, and then calls IMFMediaEngine::SetSource to set the source URI for the Media Engine.
``` cpp
void MediaEnginePlayer::SetSource(Platform::String^ szURL)
{
    // Copy the URI string into a BSTR, which is what the Media Engine expects.
    size_t cchAllocationSize = 1 + ::wcslen(szURL->Data());
    BSTR bstrURL = (LPWSTR)::CoTaskMemAlloc(sizeof(WCHAR)*(cchAllocationSize));

    if (bstrURL == nullptr)
    {
        DX::ThrowIfFailed(E_OUTOFMEMORY);
    }

    StringCchCopyW(bstrURL, cchAllocationSize, szURL->Data());

    m_spMediaEngine->SetSource(bstrURL);

    // Per COM conventions the caller owns the BSTR, so free it after the call.
    ::CoTaskMemFree(bstrURL);

    return;
}
```
The second method for setting the source content takes an IRandomAccessStream and a URI as arguments. Here too, the URI is converted to a BSTR. Then, you call the MFCreateMFByteStreamOnStreamEx function to wrap the IRandomAccessStream in a stream that the Media Engine can use. Finally, we call IMFMediaEngineEx::SetSourceFromByteStream to set the source for the Media Engine. This is the only method of the IMFMediaEngineEx interface used in this example, and one of only three methods of this interface that are supported on the phone.
``` cpp
void MediaEnginePlayer::SetBytestream(IRandomAccessStream^ streamHandle, Platform::String^ szURL)
{
    ComPtr<IMFByteStream> spMFByteStream = nullptr;

    // Copy the URI string into a BSTR, which is what the Media Engine expects.
    size_t cchAllocationSize = 1 + ::wcslen(szURL->Data());
    BSTR bstrURL = (LPWSTR)::CoTaskMemAlloc(sizeof(WCHAR)*(cchAllocationSize));

    if (bstrURL == nullptr)
    {
        DX::ThrowIfFailed(E_OUTOFMEMORY);
    }

    StringCchCopyW(bstrURL, cchAllocationSize, szURL->Data());

    // Wrap the IRandomAccessStream in a byte stream the Media Engine can consume.
    DX::ThrowIfFailed(MFCreateMFByteStreamOnStreamEx((IUnknown*)streamHandle, &spMFByteStream));
    DX::ThrowIfFailed(m_spEngineEx->SetSourceFromByteStream(spMFByteStream.Get(), bstrURL));

    // Per COM conventions the caller owns the BSTR, so free it after the call.
    ::CoTaskMemFree(bstrURL);

    return;
}
```
Next, we define the OnMediaEngineEvent method. Remember that this method is called by our MediaEngineNotify class as events arrive from the Media Engine. A DWORD indicates the event that occurred. This example shows several different events, but only two of them are handled. The MF_MEDIA_ENGINE_EVENT_CANPLAY event is raised when the Media Engine has successfully loaded the media stream and it’s ready to play. In this example, the Play method, which will be described soon, is called. The MF_MEDIA_ENGINE_EVENT_ERROR event is raised when the Media Engine has an error. In this example, the error code is retrieved, but no further action is taken.
``` cpp
void MediaEnginePlayer::OnMediaEngineEvent(DWORD meEvent)
{
    switch (meEvent)
    {
        case MF_MEDIA_ENGINE_EVENT_LOADEDMETADATA:
            break;
        case MF_MEDIA_ENGINE_EVENT_CANPLAY:
            // The media is loaded and ready; start playback.
            Play();
            break;
        case MF_MEDIA_ENGINE_EVENT_PLAY:
            break;
        case MF_MEDIA_ENGINE_EVENT_PAUSE:
            break;
        case MF_MEDIA_ENGINE_EVENT_ENDED:
            break;
        case MF_MEDIA_ENGINE_EVENT_TIMEUPDATE:
            break;
        case MF_MEDIA_ENGINE_EVENT_ERROR:
            if(m_spMediaEngine)
            {
                ComPtr<IMFMediaError> error;
                m_spMediaEngine->GetError(&error);
                USHORT errorCode = error->GetErrorCode();
            }
            break;
    }

    return;
}
```
Next, we define several methods that are thin wrappers around methods exposed by IMFMediaEngine: Play, Pause, SetMuted, and GetNativeVideoSize. Note that Play and Pause update the member variable m_isPlaying to track the current playback state. An accessor method, IsPlaying, lets the main app query the playback state.
``` cpp
void MediaEnginePlayer::Play()
{
    if (m_spMediaEngine)
    {
        DX::ThrowIfFailed(m_spMediaEngine->Play());
        m_isPlaying = true;
    }
    return;
}
```
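The walkthrough doesn't list Pause, although the header declares it and the text above notes that it also updates m_isPlaying. A minimal implementation mirroring Play would be:

``` cpp
void MediaEnginePlayer::Pause()
{
    if (m_spMediaEngine)
    {
        DX::ThrowIfFailed(m_spMediaEngine->Pause());
        m_isPlaying = false;
    }
    return;
}
```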
``` cpp
void MediaEnginePlayer::SetMuted(bool muted)
{
    if (m_spMediaEngine)
    {
        DX::ThrowIfFailed(m_spMediaEngine->SetMuted(muted));
    }
    return;
}
```
``` cpp
void MediaEnginePlayer::GetNativeVideoSize(DWORD *cx, DWORD *cy)
{
    if (m_spMediaEngine)
    {
        m_spMediaEngine->GetNativeVideoSize(cx, cy);
    }
    return;
}
```
``` cpp
bool MediaEnginePlayer::IsPlaying()
{
    return m_isPlaying;
}
```
The main app calls the TransferFrame method once per frame. This is when the Media Engine actually copies a frame of video into a supplied texture. The arguments to the method are the texture into which the video frame should be copied, the source rectangle of the video frame to copy from, and the destination rectangle in the texture to copy to. The method first checks that the Media Engine instance is not null and that m_isPlaying indicates the video should be playing. Next, we call IMFMediaEngine::OnVideoStreamTick, which returns S_OK if the Media Engine has a new frame of video ready to be rendered. If so, IMFMediaEngine::TransferVideoFrame is called to perform the actual copying of the video frame into the texture.
Note
IMFMediaEngine::TransferVideoFrame may occasionally return E_FAIL when you’re seeking in the video stream (using IMFMediaEngine::SetCurrentTime) or when changing video streams. Your app should simply ignore this error and call the method again when the next frame is available. In this example, an exception is thrown that can be caught by the caller.
``` cpp
void MediaEnginePlayer::TransferFrame(ComPtr<ID3D11Texture2D> texture, MFVideoNormalizedRect rect, RECT rcTarget)
{
    if (m_spMediaEngine != nullptr && m_isPlaying)
    {
        LONGLONG pts;
        if (m_spMediaEngine->OnVideoStreamTick(&pts) == S_OK)
        {
            // A new frame is available at the Media Engine, so copy it into the texture.
            DX::ThrowIfFailed(
                m_spMediaEngine->TransferVideoFrame(texture.Get(), &rect, &rcTarget, &m_bkgColor)
            );
        }
    }
    return;
}
```
The last members of the MediaEnginePlayer class are the destructor and the Shutdown method. Shutdown calls IMFMediaEngine::Shutdown to shut down the Media Engine instance and release its resources. The destructor calls Shutdown and then calls MFShutdown to shut down Media Foundation.
``` cpp
MediaEnginePlayer::~MediaEnginePlayer()
{
    Shutdown();
    MFShutdown();
}
```
``` cpp
void MediaEnginePlayer::Shutdown()
{
    if (m_spMediaEngine)
    {
        m_spMediaEngine->Shutdown();
    }
    return;
}
```
Modifying the Direct3D app template to use the MediaEnginePlayer class
The MediaEnginePlayer class we created in the previous section is intended to be a general-purpose wrapper for Media Engine functionality. You instantiate the class, call Initialize, set the video stream source, and call Play. Then, once per frame, you call TransferFrame, passing in the texture you want the video frame rendered to. How you use the player depends on the design of your app. This section walks you through the process of modifying the Direct3D app template to display the rendered video in a rectangle on the screen. This is just one way to use video that has been rendered to a texture.
This section of the walkthrough modifies only the CubeRenderer class that’s included in the Direct3D app template. To help keep the walkthrough as concise as possible, only the methods of the CubeRenderer source files that need to be modified will be shown here. Methods not listed here should be left unchanged for this example. If left unchanged, the Direct3D app template displays a spinning cube that is shaded using vertex colors. The changes to CubeRenderer will modify this to display a square that is shaded with a texture.
To modify the CubeRenderer class
First you’ll want to modify CubeRenderer.h. At the top of the file, add an include statement to include the MediaEnginePlayer header file.
#include "MediaEnginePlayer.h"
Add the SimpleVertex struct definition. This struct specifies the format of the vertices that we’ll pass into the render pipeline. The default app template uses vertices with position and color. This example uses vertices with position and texture coordinates so that the video can be texture-mapped onto the geometry.
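The walkthrough doesn't reproduce the struct itself. A definition consistent with the input layout created later in CreateDeviceResources (a three-float position at byte offset 0 and a two-float texture coordinate at byte offset 12) would look like this:

``` cpp
// Vertex with a position and a texture coordinate. The field layout must
// match the D3D11_INPUT_ELEMENT_DESC array defined in createVSTask below.
struct SimpleVertex
{
    DirectX::XMFLOAT3 pos;   // POSITION  (DXGI_FORMAT_R32G32B32_FLOAT, offset 0)
    DirectX::XMFLOAT2 tex;   // TEXCOORD0 (DXGI_FORMAT_R32G32_FLOAT, offset 12)
};
```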
Next, at the end of the CubeRenderer class declaration, declare the following member variables.
``` cpp
Microsoft::WRL::ComPtr<ID3D11Texture2D> m_texture;
Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> m_textureView;
Microsoft::WRL::ComPtr<ID3D11SamplerState> m_sampler;
MediaEnginePlayer* m_player;
```
The ID3D11Texture2D variable stores the texture to which video will be rendered. The ID3D11ShaderResourceView binds the texture resource to the shader. The ID3D11SamplerState defines how the shader samples the texture, for example, whether texture coordinates outside the texture wrap, mirror, or clamp. The final new member variable is a pointer to the MediaEnginePlayer class.
Next, we modify the CubeRenderer.cpp implementation file. The first method we'll modify is CreateDeviceResources, which initializes the resources that require a graphics device instance. The method is quite large, and although some of the code remains the same, for this walkthrough you should delete the existing method and replace it with the code that follows. Because the method is so long, we'll build it incrementally over the next several steps.
Paste the first part of the method into CubeRenderer.cpp, after the constructor definition.
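The walkthrough doesn't reproduce this opening snippet. Based on the standard Direct3D app template, it looks something like the following (the .cso file names are assumed to match the template's compiled shader output):

``` cpp
void CubeRenderer::CreateDeviceResources()
{
    // Create the graphics device (base template implementation).
    Direct3DBase::CreateDeviceResources();

    // Asynchronously read the compiled vertex and pixel shaders.
    auto loadVSTask = DX::ReadDataAsync("SimpleVertexShader.cso");
    auto loadPSTask = DX::ReadDataAsync("SimplePixelShader.cso");
```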
The first line calls the base implementation of CreateDeviceResources. This is where the graphics device is created. Next, you use helper methods from the DirectXHelper.h file included in the project template to read in the vertex shader and the pixel shader asynchronously. These methods return task objects; the continuations defined in the following steps run when the shader data has been loaded.
Next, paste in the code for the createVSTask task.
``` cpp
auto createVSTask = loadVSTask.then([this](Platform::Array<byte>^ fileData) {
    DX::ThrowIfFailed(
        m_d3dDevice->CreateVertexShader(
            fileData->Data,
            fileData->Length,
            nullptr,
            &m_vertexShader
        )
    );

    const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    };

    DX::ThrowIfFailed(
        m_d3dDevice->CreateInputLayout(
            vertexDesc,
            ARRAYSIZE(vertexDesc),
            fileData->Data,
            fileData->Length,
            &m_inputLayout
        )
    );
});
```
In this section, the graphics device is used to create a vertex shader from the shader file stream. Then the vertex description that the vertex shader will use is defined. Note that each vertex has a position and texture coordinate – the default template uses vertices with position and vertex color. Next, an ID3D11InputLayout object is created to store the defined input layout.
Then, paste in the code for the createPSTask task.
``` cpp
auto createPSTask = loadPSTask.then([this](Platform::Array<byte>^ fileData) {
    DX::ThrowIfFailed(
        m_d3dDevice->CreatePixelShader(
            fileData->Data,
            fileData->Length,
            nullptr,
            &m_pixelShader
        )
    );

    CD3D11_BUFFER_DESC constantBufferDesc(sizeof(ModelViewProjectionConstantBuffer), D3D11_BIND_CONSTANT_BUFFER);
    DX::ThrowIfFailed(
        m_d3dDevice->CreateBuffer(
            &constantBufferDesc,
            nullptr,
            &m_constantBuffer
        )
    );
});
```
In this section, the graphics device is used to create a pixel shader from the shader file stream. Next, a constant buffer is created to store the model, view, and projection matrices. We won’t use these matrices to project our rendered square, but they are part of the default template.
Now, paste in the code for the first part of the createCubeTask task.
``` cpp
auto createCubeTask = (createPSTask && createVSTask).then([this] () {
    SimpleVertex cubeVertices[] =
    {
        { XMFLOAT3(-1.0f, -0.45f, 0.4f), XMFLOAT2(0.0f, 1.0f) },
        { XMFLOAT3(-1.0f,  0.45f, 0.4f), XMFLOAT2(0.0f, 0.0f) },
        { XMFLOAT3( 1.0f, -0.45f, 0.4f), XMFLOAT2(1.0f, 1.0f) },
        { XMFLOAT3( 1.0f, -0.45f, 0.4f), XMFLOAT2(1.0f, 1.0f) },
        { XMFLOAT3(-1.0f,  0.45f, 0.4f), XMFLOAT2(0.0f, 0.0f) },
        { XMFLOAT3( 1.0f,  0.45f, 0.4f), XMFLOAT2(1.0f, 0.0f) },
    };

    D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
    vertexBufferData.pSysMem = cubeVertices;
    vertexBufferData.SysMemPitch = 0;
    vertexBufferData.SysMemSlicePitch = 0;
    CD3D11_BUFFER_DESC vertexBufferDesc(sizeof(cubeVertices), D3D11_BIND_VERTEX_BUFFER);
    DX::ThrowIfFailed(
        m_d3dDevice->CreateBuffer(
            &vertexBufferDesc,
            &vertexBufferData,
            &m_vertexBuffer
        )
    );

    unsigned short cubeIndices[] =
    {
        0, 1, 2,
        3, 4, 5,
    };

    m_indexCount = ARRAYSIZE(cubeIndices);

    D3D11_SUBRESOURCE_DATA indexBufferData = {0};
    indexBufferData.pSysMem = cubeIndices;
    indexBufferData.SysMemPitch = 0;
    indexBufferData.SysMemSlicePitch = 0;
    CD3D11_BUFFER_DESC indexBufferDesc(sizeof(cubeIndices), D3D11_BIND_INDEX_BUFFER);
    DX::ThrowIfFailed(
        m_d3dDevice->CreateBuffer(
            &indexBufferDesc,
            &indexBufferData,
            &m_indexBuffer
        )
    );
```
This section of code creates a vertex buffer that defines the points that make up the geometry that will be rendered. The vertices are of the type SimpleVertex, which we defined in CubeRenderer.h. The default template defines a cube, but this code defines six points that represent a rectangle made up of two triangles. The locations of these points were chosen to give the square the same aspect ratio as the video source. You can use any geometry you want. It doesn't need to be rectangular or maintain any particular aspect ratio.
The block above also creates an index buffer, which defines the order in which the vertices should be drawn. Next, paste in the final section of the createCubeTask task, which creates the texture-related resources.
``` cpp
    DX::ThrowIfFailed(
        m_d3dDevice->CreateTexture2D(
            &CD3D11_TEXTURE2D_DESC(
                DXGI_FORMAT_B8G8R8A8_UNORM,
                320,        // Width
                240,        // Height
                1,          // MipLevels
                1,          // ArraySize
                D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET
            ),
            nullptr,
            &m_texture
        )
    );

    DX::ThrowIfFailed(
        m_d3dDevice->CreateShaderResourceView(
            m_texture.Get(),
            &CD3D11_SHADER_RESOURCE_VIEW_DESC(
                m_texture.Get(),
                D3D11_SRV_DIMENSION_TEXTURE2D
            ),
            &m_textureView
        )
    );

    D3D11_SAMPLER_DESC samplerDescription;
    ZeroMemory(&samplerDescription, sizeof(D3D11_SAMPLER_DESC));
    samplerDescription.Filter = D3D11_FILTER_ANISOTROPIC;
    samplerDescription.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDescription.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDescription.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDescription.MipLODBias = 0.0f;
    samplerDescription.MaxAnisotropy = 4;
    samplerDescription.ComparisonFunc = D3D11_COMPARISON_NEVER;
    samplerDescription.BorderColor[0] = 0.0f;
    samplerDescription.BorderColor[1] = 0.0f;
    samplerDescription.BorderColor[2] = 0.0f;
    samplerDescription.BorderColor[3] = 0.0f;
    samplerDescription.MinLOD = 0;
    samplerDescription.MaxLOD = D3D11_FLOAT32_MAX;

    DX::ThrowIfFailed(
        m_d3dDevice->CreateSamplerState(
            &samplerDescription,
            &m_sampler)
    );
});
```
This section initializes the three member variables that are required for shading our square with a texture. First the ID3D11Texture2D is created. Note that the texture is created with the DXGI_FORMAT_B8G8R8A8_UNORM pixel format. This format will be passed into the Media Engine later to make sure the video is rendered with the same pixel format. Then the ID3D11ShaderResourceView and ID3D11SamplerState are initialized. This bit of code completes the createCubeTask task.
The following code is the final section of the CreateDeviceResources method.
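The walkthrough doesn't reproduce this listing. The following sketch is based on the description below and on the template's asynchronous loading pattern (PPL tasks); the file name "video.mp4", the task chaining, and the placement of m_loadingComplete are assumptions:

``` cpp
    auto createPlayerTask = createCubeTask.then([this] () {
        // Create the player and initialize it with the same pixel format
        // that the texture was created with.
        m_player = new MediaEnginePlayer;
        m_player->Initialize(m_d3dDevice, DXGI_FORMAT_B8G8R8A8_UNORM);

        // Open a video file from the app package. "video.mp4" is an assumed
        // name; add your own video file to the project.
        auto installedLocation = Windows::ApplicationModel::Package::Current->InstalledLocation;
        concurrency::create_task(installedLocation->GetFileAsync("video.mp4"))
            .then([](Windows::Storage::StorageFile^ file)
        {
            return file->OpenAsync(Windows::Storage::FileAccessMode::Read);
        }).then([this](Windows::Storage::Streams::IRandomAccessStream^ stream)
        {
            m_player->SetBytestream(stream, "video.mp4");

            // Alternatively, set the source directly from a URI:
            // m_player->SetSource("http://example.com/video.mp4");

            m_loadingComplete = true;
        });
    });
}
```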
The last part of this method creates a new instance of our MediaEnginePlayer class and calls Initialize, passing in the same pixel format with which we initialized the texture. Next, a video file is opened and passed into the SetBytestream method. Also shown, but commented out, is the code to pass the URI of a video file to the SetSource method.
The final method we need to modify is the Render method. We’ll also build this method incrementally over several steps. The beginning of the Render method is the same as the default template.
``` cpp
void CubeRenderer::Render()
{
    const float midnightBlue[] = { 0.098f, 0.098f, 0.439f, 1.000f };
    m_d3dContext->ClearRenderTargetView(
        m_renderTargetView.Get(),
        midnightBlue
    );

    m_d3dContext->ClearDepthStencilView(
        m_depthStencilView.Get(),
        D3D11_CLEAR_DEPTH,
        1.0f,
        0
    );

    // Only draw the cube once it is loaded (loading is asynchronous).
    if (!m_loadingComplete)
    {
        return;
    }
```
Now, paste in the section of this method that renders video to the texture. This code defines the source and destination rectangles for rendering video and then calls TransferFrame, passing in the rectangles and the target texture. Remember that the TransferFrame method throws an exception if IMFMediaEngine::TransferVideoFrame returns an error. If this occurs, the app simply catches the exception and returns, skipping the current frame.
``` cpp
    RECT r;
    r.top = 0;
    r.left = 0;
    r.bottom = 240;
    r.right = 320;

    MFVideoNormalizedRect rect;
    rect.top = 0.0f;
    rect.left = 0.0f;
    rect.right = 1.0f;
    rect.bottom = 1.0f;

    try
    {
        m_player->TransferFrame(m_texture, rect, r);
    }
    catch (Platform::Exception^ ex)
    {
        // TransferFrame occasionally fails; skip this frame
        // and draw again when the next frame is available.
        return;
    }
```
The next section of the method sets the graphics device’s render target and retrieves the buffer that contains the transform matrices. This section is unchanged from the default template.
``` cpp
    m_d3dContext->OMSetRenderTargets(
        1,
        m_renderTargetView.GetAddressOf(),
        m_depthStencilView.Get()
    );

    m_d3dContext->UpdateSubresource(
        m_constantBuffer.Get(),
        0,
        NULL,
        &m_constantBufferData,
        0,
        0
    );
```
Next, we set the vertex buffer as the current buffer for the graphics device, as shown below. The size of the SimpleVertex struct defined in CubeRenderer.h provides the stride, the number of bytes per vertex.
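This snippet isn't reproduced in the walkthrough; based on the template's pattern, with the stride taken from SimpleVertex, it would be:

``` cpp
    // Bind the vertex buffer; the stride is the size of one SimpleVertex.
    UINT stride = sizeof(SimpleVertex);
    UINT offset = 0;
    m_d3dContext->IASetVertexBuffers(
        0,
        1,
        m_vertexBuffer.GetAddressOf(),
        &stride,
        &offset
    );
```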
The next section of the Render method is the same as the default app template. It sets the active index buffer, pixel shader, and vertex shader.
``` cpp
    m_d3dContext->IASetIndexBuffer(
        m_indexBuffer.Get(),
        DXGI_FORMAT_R16_UINT,
        0
    );

    m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    m_d3dContext->IASetInputLayout(m_inputLayout.Get());

    m_d3dContext->VSSetShader(
        m_vertexShader.Get(),
        nullptr,
        0
    );

    m_d3dContext->VSSetConstantBuffers(
        0,
        1,
        m_constantBuffer.GetAddressOf()
    );

    m_d3dContext->PSSetShader(
        m_pixelShader.Get(),
        nullptr,
        0
    );
```
The next bit of code binds the texture to the pixel shader and sets the sampler that indicates how the texture should be sampled.
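These calls aren't reproduced in the walkthrough; they would look like this (slots 0 match the t0 and s0 registers declared in the shaders):

``` cpp
    // Bind the video texture and the sampler state to the pixel shader.
    m_d3dContext->PSSetShaderResources(0, 1, m_textureView.GetAddressOf());
    m_d3dContext->PSSetSamplers(0, 1, m_sampler.GetAddressOf());
```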
The final piece of the Render method is the same as the default template. It simply calls ID3D11DeviceContext::DrawIndexed to render the square to the screen.
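For completeness, that call, along with the closing brace of Render, is:

``` cpp
    m_d3dContext->DrawIndexed(
        m_indexCount,
        0,
        0
    );
}
```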
Modifying the vertex and pixel shaders
Before you can run the app, you need to modify the vertex shader and the pixel shader. You need to modify both shaders to use texture mapping instead of vertex colors to shade the input geometry. Also, the shaders that are included in the default template expect the vertices supplied to them to have a position and a color. Because we have modified the app to use vertices with position and texture coordinates, the shaders must be modified to use this vertex format.
Replace the contents of SimpleVertexShader.hlsl with the following code.
``` hlsl
Texture2D colorMap_ : register( t0 );
SamplerState colorSampler_ : register( s0 );

struct VS_Input
{
    float4 pos  : POSITION;
    float2 tex0 : TEXCOORD0;
};

struct PS_Input
{
    float4 pos  : SV_POSITION;
    float2 tex0 : TEXCOORD0;
};

PS_Input main( VS_Input vertex )
{
    PS_Input vsOut = ( PS_Input )0;
    vsOut.pos = vertex.pos;
    vsOut.tex0 = vertex.tex0;

    return vsOut;
}
```
Replace the contents of SimplePixelShader.hlsl with the following code.
``` hlsl
Texture2D colorMap_ : register( t0 );
SamplerState colorSampler_ : register( s0 );

struct PS_Input
{
    float4 pos  : SV_POSITION;
    float2 tex0 : TEXCOORD0;
};

float4 main( PS_Input frag ) : SV_TARGET
{
    return colorMap_.Sample( colorSampler_, frag.tex0 );
}
```
This completes the walkthrough of creating an app that uses Media Engine to render video to a texture.