4 - Encoding and Processing Media in Microsoft Azure Media Services


Microsoft Azure Media Services enables you to encode your video for delivery to a variety of devices, ranging from desktop PCs to smartphones. To do this you create processing jobs, which enable you to schedule and automate the encoding of assets.

This chapter describes how the Contoso developers incorporated Media Services' encoding and processing functionality into their web service. It summarizes the decisions that they made in order to support their business requirements, and how they designed the code that performs the encoding process.

Introduction to video encoding

Uncompressed digital video files are typically too large to deliver over the Internet without first compressing them. Encoding is the process of compressing video and audio using codecs. The quality of the encoded content is determined by the amount of data that is thrown away when the content is compressed. Many factors affect what data is thrown away during the compression process, but generally the more complex the data is and the higher the compression ratio, the more data is thrown away. In addition, people watch videos on a variety of devices, including TVs with set-top boxes, desktop PCs, tablets, and smartphones, each of which has different bandwidth and compression requirements.

Codecs both compress and decompress digital media files. Audio codecs compress and decompress audio, while video codecs compress and decompress video. Lossless codecs preserve all of the data during the compression process. When the file is decompressed the result is a file that is identical to the input file. Lossy codecs throw away some of the data when encoding, and produce smaller files than lossless codecs. The two main codecs used by Media Services to encode are H.264 and VC-1.

Encoders are software or hardware implementations that compress digital media using codecs. Encoders usually have settings that allow you to specify properties of the encoded media, such as the resolution, bitrate, and file format. File formats are containers that hold the compressed media as well as data about the codecs that were used during the compression process. For a list of the codecs and file formats supported by Media Services for import see "Supported input formats." The following list shows the file formats, and the video and audio codecs, that are supported for export.

  • Windows Media (*.wmv, *.wma): video codec VC-1 (Simple, Main, and Advanced profiles); audio codec Windows Media Audio (Standard, Professional, Voice, Lossless).
  • MP4 (.mp4): video codec H.264 (Baseline, Main, and High profiles); audio codecs AAC-LC, HE-AAC v1, HE-AAC v2, and Dolby Digital Plus.
  • Smooth Streaming (PIFF 1.1) (*.ismv, *.isma): video codecs VC-1 (Advanced profile) and H.264 (Baseline, Main, and High profiles); audio codecs Windows Media Audio (Standard, Professional), AAC-LC, HE-AAC v1, and HE-AAC v2.

For information about additional supported codecs and filters in Media Services, see "Codec Objects" and "DirectShow Filters."

Resolution specifies how many lines make up a full video image. Typical resolutions are 1080p and 720p for high definition, and 480p for standard definition. The bitrate of a video is the number of bits recorded per second, and is usually specified in kilobits per second (kbps). The higher the bitrate, the higher the quality of the video. Videos can be encoded using a constant bitrate or a variable bitrate.

In constant bitrate encoding (CBR) a maximum bitrate is specified that the encoder can generate. If the video being encoded requires a higher bitrate then the resulting video will be of poor quality. CBR encoding is useful when there's a requirement to stream a video at a predictable bit rate with a consistent bandwidth utilization.

While CBR encoding aims to maintain the bitrate of the encoded media, variable bitrate (VBR) encoding aims to achieve the best possible quality of the encoded media. A higher bitrate is used for more complex scenes, with a lower bitrate being used for less complex scenes. VBR encoding is more computationally intensive, and often involves multiple passes when encoding video.

Encoding for delivery using Azure Media Services

Media Services provides a number of media processors that enable video to be processed. Media processors handle a specific processing task, such as encoding, format conversion, encrypting, or decrypting media content. Encoding video is the most common Media Services processing operation, and it is performed by the Azure Media Encoder. The Media Encoder is configured using encoder preset strings, with each preset specifying a group of settings required for the encoder. For a list of all the presets see "Appendix B – Azure Media Encoder Presets."

Media Services supports both progressive download and streaming of video. When encoding for progressive download you encode to a single bitrate. However, you could encode a video multiple times and have a collection of single bitrate files from which a client can choose. When a client chooses a bitrate the entire video will be displayed at that bitrate. However, if network conditions degrade, playback of the video may pause while enough data is buffered to be able to continue.

To be able to stream content it must first be converted into a streaming format. This can be accomplished by encoding content directly into a streaming format, or converting content that has already been encoded into H.264 into a streaming format. The second option is performed by the Azure Media Packager, which changes the container that holds the video, without modifying the video encoding. The Media Packager is configured through XML rather than through string presets. For more information about the XML configuration see "Task Preset for Azure Media Packager."

There are two types of streaming offered by Media Services:

  • Single bitrate streaming
  • Adaptive bitrate streaming

With single bitrate streaming a video is encoded to a single bitrate stream and divided into chunks. The stream is delivered to the client one chunk at a time. The chunk is displayed and the client then requests the next chunk. When encoding for single bitrate streaming you can encode to a number of different bitrate streams and clients can select a stream. With single bitrate streaming, once a bitrate stream is chosen the entire video will be displayed at that bitrate.

When encoding for adaptive bitrate streaming you encode to an MP4 bitrate set that creates a number of different bitrate streams. These streams are also broken into chunks. However, adaptive bitrate technologies allow the client to determine network conditions and select from among several bitrates. When network conditions degrade, the client can select a lower bitrate allowing the video to continue to play at a lower video quality. Once network conditions improve the client can switch back to a higher bitrate with improved video quality.

Media Services supports three adaptive bitrate streaming technologies:

  • Smooth streaming, created by Microsoft
  • HTTP Live Streaming (HLS), created by Apple
  • MPEG-DASH, an ISO standard

Media Services enables you to encode and stream video to a variety of devices. The following list summarizes the streaming technologies supported by different device types, together with example presets.

  • Windows: Smooth Streaming and MPEG-DASH. Example presets: H264 Broadband 1080p, H264 Adaptive Bitrate MP4 Set 1080p, H264 Smooth Streaming 1080p.
  • Xbox: Smooth Streaming. Example preset: H264 Smooth Streaming 720p Xbox Live ADS.
  • iOS: HLS, and Smooth Streaming (with the Smooth Streaming Porting Kit). Example presets: H264 Broadband 1080p, H264 Adaptive Bitrate MP4 Set 1080p, H264 Smooth Streaming 1080p.
  • Android: Smooth Streaming via the OSMF plug-in (when the device supports Flash), and HLS (Android OS 3.1 and greater). Example presets: H264 Broadband 720p, H264 Smooth Streaming 720p.

Smooth streaming is the preferred adaptive bitrate streaming technology for Microsoft platforms. There are a number of approaches to creating smooth streaming assets:

  • Encode your single video file using one of the H.264 smooth streaming task presets to encode directly into smooth streaming, as shown in the sketch after this list. For more information see "Appendix B – Azure Media Encoder Presets."
  • Encode your single video file using one of the H.264 adaptive bitrate task presets using the Azure Media Encoder, and then use the Azure Media Packager to convert the adaptive bitrate MP4 files to smooth streaming.
  • Encode your video locally to a variety of bit rates, and then create a manifest file describing the video files. After uploading the files to the Azure Storage account associated with your Azure Media account, use the Azure Media Packager to convert the MP4 files into smooth streaming files.
  • Encode your video to MP4 and use dynamic packaging to automatically convert the MP4 to smooth streaming. For more information about dynamic packaging see "Dynamic packaging."
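
The following code example is a minimal sketch of the first approach in the list above, encoding a single video file directly into smooth streaming. It is not taken from the Contoso web service; the context and inputAsset variables are assumptions, and the preset string is one of the standard Azure Media Encoder presets listed in Appendix B.

IJob job = context.Jobs.Create("Encode to Smooth Streaming");

// Retrieve the Azure Media Encoder, for example by using a helper such as the
// GetLatestMediaProcessorByName method shown later in this chapter.
IMediaProcessor encoder = GetLatestMediaProcessorByName("Windows Azure Media Encoder");

// The preset string selects the group of encoder settings to use.
ITask task = job.Tasks.AddNew("Smooth Streaming encoding task",
    encoder,
    "H264 Smooth Streaming 720p",
    TaskOptions.None);
task.InputAssets.Add(inputAsset);
task.OutputAssets.AddNew("Smooth Streaming output", AssetCreationOptions.None);

job.Submit();
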
Bharath says:
If you intend to protect your content with PlayReady you should use the Azure Media Encoder to encode directly to Smooth Streaming, and then use the Azure Media Packager to protect your media.

Note

To convert WMV files to Smooth Streaming you must first encode your files from WMV to H.264. WMV is a video codec that typically has an ASF container format, with H.264 being a video codec that can be used with the MP4 container format. Smooth Streaming uses a variant of MP4 called fragmented MP4, or F-MP4. Smooth Streaming is an adaptive streaming format that requires a set of files of different bitrates, all encoded with fragments that are aligned across the bitrates. Therefore, a set of MP4 files that are encoded with aligned fragments can be converted to F-MP4 without requiring a re-encode. However, this is not the case with WMV files. For more information see "Smooth Streaming Technical Overview."

Creating encoding jobs in Azure Media Services

After media has been uploaded into Media Services it can be encoded into one of the formats supported by the Media Services Encoder. Media Services Encoder supports encoding using the H.264 and VC-1 codecs, and can generate MP4 and Smooth Streaming content. However, MP4 and Smooth Streaming content can be converted to Apple HLS v3 or MPEG-DASH by using dynamic packaging. For more information about dynamic packaging see "Dynamic packaging." For information about the input and output formats supported by Media Services see "Supported input formats" and "Introduction to encoding."

Encoding jobs are created and controlled using a Job. Each Job contains metadata about the processing to be performed, and contains one or more Tasks that specify a processing task, its input Assets, output Assets, and a media processor and its settings. The following figure illustrates this relationship.

The relationship between jobs, tasks, and assets

Tasks within a Job can be chained together, where the output asset of one task is given as the input asset to the next task. By following this approach one Job can contain all of the processing required for a media presentation.
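
The following code example is a minimal sketch (not taken from the Contoso web service) of chaining two tasks within a single job; the context, inputAsset, encoder, packager, and packagerConfigurationXml variables are assumptions.

IJob job = context.Jobs.Create("Encode and package");

// Task 1: encode the uploaded video to a set of adaptive bitrate MP4s.
ITask encodeTask = job.Tasks.AddNew("Adaptive bitrate encoding task",
    encoder, "H264 Adaptive Bitrate MP4 Set 720p", TaskOptions.None);
encodeTask.InputAssets.Add(inputAsset);
IAsset mp4Asset = encodeTask.OutputAssets.AddNew("MP4 output", AssetCreationOptions.None);

// Task 2: the output asset of the first task becomes the input asset of the second.
ITask packageTask = job.Tasks.AddNew("Smooth Streaming packaging task",
    packager, packagerConfigurationXml, TaskOptions.None);
packageTask.InputAssets.Add(mp4Asset);
packageTask.OutputAssets.AddNew("Smooth Streaming output", AssetCreationOptions.None);

job.Submit();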

  • The maximum number of Tasks per Job is 50.
  • The maximum number of Assets per Task is 50.
  • The maximum number of Assets per Job is 100. This includes queued, finished, active, and canceled jobs, but doesn't include deleted jobs.

Accessing Azure Media Services media processors

A standard task that's required for most processing jobs is to call a specific media processor to process the job.

Markus says:
A media processor is a component that handles a specific processing task such as encoding, format conversion, encryption, or decrypting media content.

The following list summarizes the media processors supported by Media Services.

  • Azure Media Encoder: allows you to run encoding tasks using the Media Encoder.
  • Azure Media Packager: allows you to convert media assets from MP4 to Smooth Streaming format, and to convert media assets from Smooth Streaming format to HLS format.
  • Azure Media Encryptor: allows you to encrypt media assets using PlayReady Protection.
  • Storage Decryption: allows you to decrypt media assets that were encrypted using storage encryption.

To use a specific media processor you pass the name of the processor to the GetLatestMediaProcessorByName method, which is shown in the following code example.

private IMediaProcessor GetLatestMediaProcessorByName(string mediaProcessorName)
{
    // Select the named media processor and take the latest available version.
    var processor = this.context.MediaProcessors.Where(p => p.Name == mediaProcessorName)
        .ToList().OrderBy(p => new Version(p.Version)).LastOrDefault();

    if (processor == null)
    {
        throw new ArgumentException(string.Format("Unknown media processor: {0}", mediaProcessorName));
    }

    return processor;
}

The method retrieves the specified media processor and returns a valid instance of it. The following code example shows how you'd use the GetLatestMediaProcessorByName method to retrieve the Azure Media Encoder processor.

IMediaProcessor mediaProcessor = this.GetLatestMediaProcessorByName(MediaProcessorNames.WindowsAzureMediaEncoder);

Securely encoding media within Azure Media Services

When encoding encrypted assets you must specify the encryption option when adding the output asset to the processing task. The encryption of each asset created by a job is controlled by specifying one of the AssetCreationOptions enumeration values for each task in the job.

Any encrypted assets will be decrypted before a processing operation and stored in the encrypted file system on the Azure Compute node that is processing the task. The media processors then perform the required operations on the media stored in the encrypted file system and the output of each task is written to storage.

The following figure summarizes how media can be protected during the encoding and packaging process.

Media encryption options during the encoding and packaging process

Markus says:
The Contoso web service does not use any encryption because videos are encoded for progressive download and streaming. However, when developing a commercial video-on-demand service you should encrypt the content both in transit and at rest.

If you want to encode a video and secure it for storage you should specify AssetCreationOptions.StorageEncrypted when creating the output asset for the encoding task. When a storage encrypted asset is downloaded using one of the Media Services SDKs the SDK will automatically decrypt the asset as part of the download process.

If you want to encode and package a video for streaming or progressive download you should specify AssetCreationOptions.None when creating the output asset for the encoding task.
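
The following code example is a minimal sketch (the job, mediaProcessor, encodingPreset, and inputAsset variables are assumptions) showing where the AssetCreationOptions value is specified when adding a task's output asset.

ITask encodingTask = job.Tasks.AddNew("Encoding task",
    mediaProcessor, encodingPreset, TaskOptions.ProtectedConfiguration);
encodingTask.InputAssets.Add(inputAsset);

// Keep the encoded output encrypted at rest in Azure Storage.
encodingTask.OutputAssets.AddNew("Storage encrypted output",
    AssetCreationOptions.StorageEncrypted);

// Alternatively, produce an unencrypted output suitable for streaming or
// progressive download.
// encodingTask.OutputAssets.AddNew("Unencrypted output", AssetCreationOptions.None);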

Scaling Azure Media Services encoding jobs

By default each Media Services account can have one active encoding task at a time. However, you can reserve encoding units that allow you to have multiple encoding tasks running concurrently, one for each encoding reserved unit you purchase. New encoding reserved units are allocated almost immediately.

The number of encoding reserved units is equal to the number of media tasks that can be processed concurrently in a given account. For example, if your account has 5 reserved units then 5 media tasks can run concurrently. The remaining tasks will wait in the queue and will be processed sequentially as soon as a running task completes.

If an account doesn't have any reserved units then tasks will be processed sequentially. In this scenario the time between one task finishing and the next one starting will depend on the availability of system resources.

The number of encoding reserved units can be configured on the Encoding page of the Azure Management Portal.

By default every Media Services account can scale to up to 25 encoding reserved units. A higher limit can be requested by opening a support ticket. For more information about opening a support ticket see "Requesting Additional Reserved Units."

For more information about scaling Media Services see "How to Scale a Media Service."

Accessing encoded media in Azure Media Services

Accessing content in Media Services always requires a locator. A locator combines the URL to the media file with a set of time-based access permissions. There are two types of locators – shared access signature locators and on-demand origin locators.

Bharath says:
You cannot have more than five unique locators associated with a given asset at one time. This is due to shared access policy restrictions set by the Azure Blob storage service.

A shared access signature locator grants access rights to a specific media asset through a URL. You can grant users who have the URL access to a specific resource for a period of time by using a shared access signature locator, in addition to specifying what operations can be performed on the resource.

On-demand origin locators are used when streaming content to a client application, and are exposed by the Media Services Origin Service which pulls the content from Azure Storage and delivers it to the client. An on-demand origin locator URL will point to a streaming manifest file in an asset. For more information about the origin service see "Origin Service."
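
The following code example is a minimal sketch (the context and asset variables are assumptions) of creating both types of locator with time-based read access.

// A shared access signature locator grants direct, time-limited read access to the
// asset's files in Azure Storage.
ILocator sasLocator = context.Locators.Create(
    LocatorType.Sas, asset, AccessPermissions.Read, TimeSpan.FromDays(30));

// An on-demand origin locator exposes the asset through the origin service for
// streaming; its path is combined with a manifest file name to build a streaming URL.
ILocator originLocator = context.Locators.Create(
    LocatorType.OnDemandOrigin, asset, AccessPermissions.Read, TimeSpan.FromDays(30));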

Markus says:
Locators are not designed for managing per-user access control. To give different access rights to different individuals, use Digital Rights Management (DRM) solutions.

Encoding process in the Contoso Azure Media Services web service

The following figure shows a high-level overview of the Contoso encoding process. The encoding process is managed by the EncodingService class in the Contoso.Domain.Services.Impl project.

A high-level overview of the Contoso encoding process

The EncodingService class in the Contoso web service retrieves the asset details from the CMS database and passes the encoding job to Media Services, where it's submitted to the Azure Media Encoder. The encoding job and video details are saved to the CMS database while the Media Encoder processes the job, retrieving the input asset from Azure Storage, and writing the output assets to Azure Storage. The Contoso web service always encodes videos to adaptive bitrate MP4s, and then uses dynamic packaging to convert the adaptive bitrate MP4s to Smooth Streaming, HLS, or MPEG-DASH, on demand. When encoding is complete Media Services notifies the EncodingService class, which generates locator URLs to the output assets in Azure Storage, and updates the encoding job and video details in the CMS database. For more information about dynamic packaging see "Dynamic packaging."

This process can be decomposed into the following steps for encoding content with Media Services:

  1. Create a new EncodeVideoMessage and add it to the ContosoEncodingQueue.
  2. Poll the ContosoEncodingQueue and convert the received EncodeVideoMessage to an EncodingRequest.
  3. Delete the EncodeVideoMessage from the ContosoEncodingQueue.
  4. Process the EncodingRequest.
    1. Create a new Job.
    2. Retrieve Azure Media Encoder media processor to process the job.
    3. Create a new EncodingPipeline to encode the video.
    4. Add a VideoEncodingPipelineStep to the EncodingPipeline.
    5. Add a ThumbnailEncodingPipelineStep to the EncodingPipeline, if required.
    6. Add a ClipEncodingPipelineStep to the EncodingPipeline, if required.
    7. Configure the Job.
      1. Create a Task in the VideoEncodingPipelineStep and specify input and output assets for the Task.
      2. Create a Task in the ThumbnailEncodingPipelineStep and specify input and output assets for the Task.
      3. Create a Task in the ClipEncodingPipelineStep and specify input and output assets for the Task.
    8. Submit the Job to Azure Media Services.
      1. A new JobNotificationMessage is added to the ContosoJobNotificationQueue.
    9. Create a new EncodingJob and populate it with job information, before storing it in the CMS database.
    10. Update the EncodingStatus of the Job from NotStarted to Encoding.
  5. Poll the ContosoJobNotificationQueue and convert the received JobNotificationMessage to a JobNotification.
  6. Delete the JobNotificationMessage from the ContosoJobNotificationQueue.
  7. Process the JobNotification.
    1. When the JobState is Finished, retrieve the job and video details from the CMS database.
    2. Process the output assets from the Job.
      1. Process the VideoEncodingPipelineStep output assets.
      2. Create on-demand origin and shared access signature locators for the output asset.
      3. Generate URIs for smooth streaming, HLS, MPEG-DASH, and progressive download versions of the output asset.
      4. Add the URIs to VideoPlay objects, and add the VideoPlay objects to the VideoDetail object.
      5. Process the ThumbnailEncodingPipelineStep output assets.
      6. Create on-demand origin and shared access signature locators for the output asset.
      7. Generate URIs for the thumbnail images.
      8. Add the URIs to VideoThumbnail objects, and add them to the VideoDetail object.
      9. Process the ClipEncodingPipelineStep output assets.
      10. Create on-demand origin and shared access signature locators for the output asset.
      11. Generate URIs for the video clip assets.
      12. Add the URIs to VideoPlay objects, and add the VideoPlay objects to the VideoDetail object.
    3. Save the updated video details and job details to the CMS database.

Media Services can deliver notification messages to Azure Storage queues when processing media jobs. The Contoso developers decided to use Media Services notifications during the encoding process. The advantages of this approach are that it provides an easy mechanism for managing the encoding jobs submitted by multiple clients, and that encoding progress can be monitored through job notification messages, if required.

When a video is uploaded and published, an encoding message is added to a queue named ContosoEncodingQueue, and once the encoding job has been submitted to Media Services, job notification messages are delivered to the ContosoJobNotificationQueue. When the encoding job completes the next step in the content publishing workflow is triggered, which is to process the assets output from the encoding job, and to update the CMS database.

Bharath says:
Azure Storage Queues must be polled – they are not a push service.

The following figure shows a high-level overview of how the ContosoEncodingQueue and ContosoJobNotificationQueue are used in the encoding process. The diagram shows the method names in the EncodingService class that initiate and manage the encoding process.

Use of queues in the encoding process

Note

Azure Storage Queues do not provide a guaranteed first-in-first-out (FIFO) delivery. For more information see "Azure Queues and Azure Service Bus Queues – Compared and Contrasted."

As mentioned in the previous chapter, the PublishAsset method in the EncodingService class is responsible for starting the encoding process.

public async Task PublishAsset(VideoPublish video)
{
    ...
    var videoEncodingMessage = new EncodeVideoMessage() 
    { 
        AssetId = video.AssetId, 
        VideoId = video.VideoId, 
        IncludeThumbnails = true,
        Resolution = video.Resolution
    };

    ...

    IAzureQueue<EncodeVideoMessage> queue = new AzureQueue<EncodeVideoMessage>(
        Microsoft.WindowsAzure.CloudStorageAccount.Parse(
        CloudConfiguration.GetConfigurationSetting("WorkerRoleConnectionString")),
        CloudConfiguration.GetConfigurationSetting("ContosoEncodingQueueName"),
        TimeSpan.FromSeconds(300));
    queue.AddMessage(videoEncodingMessage);
}

The method creates an EncodeVideoMessage instance and adds it to an AzureQueue named ContosoEncodingQueue. Every video that will be encoded or otherwise processed by Media Services must be added to this queue. Each EncodeVideoMessage instance contains properties that specify the details of the video to be encoded.
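
The EncodeVideoMessage class is part of the Contoso solution and is not shown in this chapter; the following sketch indicates the shape implied by the code in this chapter (the base class and the property types are assumptions).

public class EncodeVideoMessage : AzureQueueMessage   // base class assumed
{
    public string AssetId { get; set; }
    public int VideoId { get; set; }
    public bool IncludeThumbnails { get; set; }
    public string Resolution { get; set; }
    public TimeSpan ClipStartTime { get; set; }
    public TimeSpan ClipEndTime { get; set; }
}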

The Contoso.Azure project specifies a worker role named Contoso.EncodingWorker that is responsible for managing the two queues used in the encoding process. When a video is published it's added to the ContosoEncodingQueue for encoding, and once the encoding is complete it's moved to the ContosoJobNotificationQueue. The Contoso.EncodingWorker project contains the classes that make up the worker role.

The Run method in the WorkerRole class in the Contoso.EncodingWorker project is responsible for managing the two queues.

public override void Run()
{
    ...
    var contosoEncodingQueue = 
        this.container.Resolve<IAzureQueue<EncodeVideoMessage>>("Standard");
    var contosoEncodingCompleteQueue = 
        this.container.Resolve<IAzureQueue<JobNotificationMessage>>("Standard");

     BatchMultipleQueueHandler
         .For(contosoEncodingQueue, GetStandardQueueBatchSize())
         .Every(TimeSpan.FromSeconds(GetSummaryUpdatePollingInterval()))
         .WithLessThanTheseBatchIterationsPerCycle(
              GetMaxBatchIterationsPerCycle())
         .Do(this.container.Resolve<EncodeVideoCommand>());

     BatchMultipleQueueHandler
         .For(contosoEncodingCompleteQueue, GetStandardQueueBatchSize())
         .Every(TimeSpan.FromSeconds(GetSummaryUpdatePollingInterval()))
         .WithLessThanTheseBatchIterationsPerCycle(
             GetMaxBatchIterationsPerCycle())
         .Do(this.container.Resolve<JobNotificationCommand>());
     ...
}

This method sets up two BatchMultipleQueueHandler instances to process the ContosoEncodingQueue and the ContosoJobNotificationQueue. The BatchMultipleQueueHandler<T> class implements the For, Every, and Do methods. The Do method in turn calls the Cycle method, which calls the PreRun, Run, and PostRun methods of the batch command instance (any command which derives from IBatchCommand). Therefore, the first BatchMultipleQueueHandler polls the ContosoEncodingQueue every 10 seconds and, for every EncodeVideoMessage on the queue, runs the PreRun, Run, and PostRun methods of the EncodeVideoCommand instance. The second BatchMultipleQueueHandler polls the ContosoJobNotificationQueue every 10 seconds and, for every JobNotificationMessage on the queue, runs the PreRun, Run, and PostRun methods of the JobNotificationCommand instance. The 10 second time interval is set by the SummaryUpdatePollingInterval constant stored in configuration, and is retrieved by the GetSummaryUpdatePollingInterval method in the WorkerRole class. After the Run method of a batch command has executed the message is deleted from the appropriate queue.
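
The following sketch shows the batch command contract implied by the description above; the actual Contoso interface and its semantics may differ in detail.

public interface IBatchCommand<in T>
{
    // Invoked before a batch of messages is processed.
    void PreRun();

    // Processes a single message taken from the queue; the worker role deletes the
    // message from the queue after Run has executed.
    bool Run(T message);

    // Invoked after the batch has been processed.
    void PostRun();
}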

Therefore, after the PublishAsset method of the EncodingService class places an EncodeVideoMessage, containing the details of the video to be encoded, onto the ContosoEncodingQueue, the Run method of the EncodeVideoCommand class is invoked the next time the queue is polled.

public bool Run(EncodeVideoMessage message)
{
    var encodingRequest = new EncodingRequest() 
    { 
        AssetId = message.AssetId, 
        ClipStartTime = message.ClipStartTime,
        ClipEndTime = message.ClipEndTime,
        IncludeThumbnails = message.IncludeThumbnails, 
        Resolution = message.Resolution,
        VideoId = message.VideoId
    };

    this.encodingService.EncodeAsset(encodingRequest);
    return true;
}

This method converts the EncodeVideoMessage to a new EncodingRequest instance and then calls the EncodeAsset method of the EncodingService class. After the Run method has executed the EncodeVideoMessage is deleted from the ContosoEncodingQueue.

Creating the video encoding pipeline for Azure Media Services

The EncodeAsset method retrieves the media asset to be encoded and then creates a new IJob instance and gets the media encoder from the context, before creating a new instance of the EncodingPipeline class. The EncodingPipeline is used to place video encoding steps into a pipeline. The following figure shows an overview of the steps in the EncodingPipeline.

An overview of the steps in the EncodingPipeline

The pipeline consists of three steps:

  1. A VideoEncodingPipelineStep.
  2. A ThumbnailEncodingPipelineStep.
  3. A ClipEncodingPipelineStep.
Markus says:
When an encoding job completes there is no information in the tasks or output assets associated with the job that identifies what it is. By using an encoding pipeline, and pipeline steps, you are able to append a suffix to each output asset and then match them up when the encoding completes.

The following code example shows how the EncodeAsset method in the EncodingService class creates the encoding pipeline.

public async Task EncodeAsset(EncodingRequest encodingRequest)
{
    ...
    // create a new instance of the encoding pipeline
    EncodingPipeline encodingPipeline = new EncodingPipeline();
            
    // add the video to the encoding pipeline
    VideoEncodingPipelineStep videoEncodingStep = 
        new VideoEncodingPipelineStep(inputAsset, encodingRequest.Resolution);
    encodingPipeline.AddStep(videoEncodingStep);

    if (encodingRequest.IncludeThumbnails)
    {
        // add the thumbnails to the encoding pipeline
        ThumbnailEncodingPipelineStep thumbnailEncodingStep = 
            new ThumbnailEncodingPipelineStep(inputAsset);
        encodingPipeline.AddStep(thumbnailEncodingStep);
    }

    if(encodingRequest.ClipEndTime.Ticks > 0)
    {
        ClipEncodingPipelineStep clipEncodingStep = new ClipEncodingPipelineStep(
            inputAsset, encodingRequest.ClipStartTime, 
            encodingRequest.ClipEndTime, encodingRequest.Resolution);
        encodingPipeline.AddStep(clipEncodingStep);
    }

    // configure the job; adds the steps as tasks to the job
    encodingPipeline.ConfigureJob(job, mediaProcessor);
    ...
}

An EncodingPipeline instance will always have a VideoEncodingPipelineStep. However, a ThumbnailEncodingPipelineStep will only be added to the EncodingPipeline if the IncludeThumbnails property of the EncodingRequest is set to true. Similarly, a ClipEncodingPipelineStep will only be added to the EncodingPipeline if the ClipEndTime.Ticks property of the EncodingRequest is greater than zero.

Markus says:
The VideoEncodingPipelineStep, ThumbnailEncodingPipelineStep, and ClipEncodingPipelineStep classes all implement the IEncodingPipelineStep interface, which specifies that implementing classes must provide the ConfigureStep and ProcessOutput methods.
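
Based on that description, and on the method signatures shown later in this chapter, the interface looks broadly like the following sketch.

public interface IEncodingPipelineStep
{
    // Adds this step's task, with its input and output assets, to the job.
    void ConfigureStep(IJob job, IMediaProcessor mediaProcessor);

    // Processes this step's output assets once the job has finished.
    void ProcessOutput(CloudMediaContext context, IAsset outputAsset, 
        VideoDetail videoDetail);
}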

The following figure shows the methods involved in configuring the EncodingPipeline steps.

The methods involved in configuring the EncodingPipeline steps

The ConfigureJob method of the EncodingPipeline class is shown in the following code example.

public void ConfigureJob(IJob job, IMediaProcessor mediaProcessor)
{
    ...
    foreach(IEncodingPipelineStep step in this.steps)
    {
        step.ConfigureStep(job, mediaProcessor);
    }
}

This method simply calls the ConfigureStep method of each of the added IEncodingPipelineStep instances. The overall effect is to add the steps as tasks to the job.

Configuring the video encoding pipeline step

An EncodingPipeline will always contain a VideoEncodingPipelineStep, which is responsible for encoding a video.

The VideoEncodingPipelineStep class defines a dictionary that contains the encoding presets that can be chosen when uploading a video using the client apps. The dictionary specifies four encoding presets:

  • 1080p: H264AdaptiveBitrateMP4Set1080p
  • 720p: H264AdaptiveBitrateMP4Set720p
  • 480p 16x9: H264AdaptiveBitrateMP4SetSD16x9
  • 480p 4x3: H264AdaptiveBitrateMP4SetSD4x3

These presets produce assets at different resolutions and aspect ratios for delivery via one of many adaptive streaming technologies after suitable packaging. If no encoding preset is specified the pipeline defaults to using the 720p preset.

Not all videos can be encoded using these presets, for example low bitrate videos. In such cases you should create custom encoding presets.

The following code example shows the ConfigureStep method of the VideoEncodingPipelineStep class.

public void ConfigureStep(IJob job, IMediaProcessor mediaProcessor)
{
    ITask encodingTask = job.Tasks.AddNew(
        this.inputAsset.Name + EncodingTaskSuffix,
        mediaProcessor,
        this.encodingPreset,
        TaskOptions.ProtectedConfiguration);
    encodingTask.InputAssets.Add(this.inputAsset);
    encodingTask.OutputAssets.AddNew(this.inputAsset.Name + EncodingOuputSuffix, 
        AssetCreationOptions.None);
}

The method declares a task, passing the task a name made up of the input asset name with "_EncodingTask" appended to it, a media processor instance, a configuration string for handling the processing job, and a TaskOptions setting that specifies that the configuration data should be encrypted. The task is then added to the Tasks collection of the job. An input asset is then specified for the task, along with an output asset whose filename is made up of the input asset name with "_EncodingOutput" appended to it.

Markus says:
By default, all assets are created as storage encrypted assets. To output an unencrypted asset for playback you must specify AssetCreationOptions.None.

Configuring the thumbnail encoding pipeline step

The ThumbnailEncodingPipelineStep class is responsible for producing thumbnail image files from a video file. In the Contoso video apps these thumbnail images are used to represent each video on the main page.

A ThumbnailEncodingPipelineStep will only be added to the EncodingPipeline if the IncludeThumbnails property of the EncodingRequest is set to true. In the Contoso web service this property is always set to true.

The following code example shows the ConfigureStep method of the ThumbnailEncodingPipelineStep class.

public void ConfigureStep(IJob job, IMediaProcessor mediaProcessor)
{
    ITask thumbnailTask = job.Tasks.AddNew(
        this.inputAsset.Name + ThumbnailTaskSuffix,
        mediaProcessor,
        this.thumbnailPresetXml,
        TaskOptions.ProtectedConfiguration);
    thumbnailTask.InputAssets.Add(this.inputAsset);
    thumbnailTask.OutputAssets.AddNew(this.inputAsset.Name + 
        ThumbnailOutputSuffix, AssetCreationOptions.None);
}

This method declares a task, passing the task a name made up of the input asset name with "_ThumbnailTask" appended to it, a media processor instance, a custom configuration XML preset for handling the processing job, and a TaskOptions setting that specifies that the configuration data should be encrypted. The custom configuration XML preset specifies the settings to use when creating the task. The task is then added to the Tasks collection of the job. An input asset is then specified for the task, along with an output asset whose filename is made up of the input asset name with "_ThumbnailOutput" appended to it. In order to output an unencrypted asset the AssetCreationOptions.None enumeration value is specified.

The following code example shows the XML configuration preset used to create thumbnails.

<?xml version="1.0" encoding="utf-8"?>
<Thumbnail Size="50%,*" Type="Jpeg" 
    Filename="{OriginalFilename}_{Size}_{ThumbnailTime}_{ThumbnailIndex}_{Date}_{Time}.{DefaultExtension}">
    <Time Value="10%" />
</Thumbnail>

There are two primary elements:

  • The <Thumbnail> element that specifies general settings for the thumbnail image that will be generated.
  • The <Time> element that specifies the time in the source video stream from which a thumbnail will be generated.

The <Thumbnail> element specifies that the generated thumbnail should be a JPEG that's 50% of the height of the video, with the aspect ratio maintained. A template is also specified for producing the thumbnail filename. The <Time> element specifies that the thumbnail will be generated from the video data 10% of the way through the video stream. For more information about customizing the settings of a thumbnail file see "Task Preset for Thumbnail Generation."

Poe says:
Although we only generate one thumbnail image per video, the CMS allows multiple thumbnail URLs for each video to be stored in the database.
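
If multiple thumbnails per video were required, the thumbnail task preset can generate a series of images. The following sketch is not used by Contoso, and the Step and Stop attributes shown on the <Time> element are based on the "Task Preset for Thumbnail Generation" topic referenced above; check that topic for the full schema.

<?xml version="1.0" encoding="utf-8"?>
<Thumbnail Size="50%,*" Type="Jpeg" 
    Filename="{OriginalFilename}_{ThumbnailIndex}.{DefaultExtension}">
    <Time Value="0:00:10" Step="0:00:30" Stop="0:02:00" />
</Thumbnail>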

Configuring the clip encoding pipeline step

The ClipEncodingPipelineStep class is responsible for producing a clip (a short segment of video) from the video being encoded.

Christine says:
The Contoso Video web client is the only client that demonstrates producing clips from a video.

A ClipEncodingPipelineStep will only be added to the EncodingPipeline if the ClipEndTime.Ticks property of the EncodingRequest is greater than zero.

The ClipEncodingPipelineStep class defines a dictionary that contains the encoding presets that can be chosen when uploading a video using the client apps. The dictionary specifies the same four encoding presets that are used by the VideoEncodingPipelineStep class. Therefore, when a user selects an encoding preset it is used by both the VideoEncodingPipelineStep class and the ClipEncodingPipelineStep class, with the ClipEncodingPipelineStep class also defaulting to using the 720p preset if no encoding preset is specified.

The following code example shows the ConfigureStep method of the ClipEncodingPipelineStep class.

public void ConfigureStep(IJob job, IMediaProcessor mediaProcessor)
{
    var clipXml = this.clipPresetXml.Replace("%startTime%", 
        clipStartTime.ToString( @"hh\:mm\:ss"));
    clipXml = clipXml.Replace("%endTime%", this.clipEndTime.ToString( 
        @"hh\:mm\:ss"));

    ITask clipTask = job.Tasks.AddNew(
        this.inputAsset.Name + ClipTaskSuffix,
        mediaProcessor,
        clipXml,
        TaskOptions.ProtectedConfiguration);

    clipTask.InputAssets.Add(this.inputAsset);
    clipTask.OutputAssets.AddNew(this.inputAsset.Name + ClipOuputSuffix, 
        AssetCreationOptions.None);
}

This method updates the start and end time in the clip XML preset data, with the times specified by the user. The clip XML preset data was retrieved by the ClipEncodingPipelineStep constructor. The method then declares a task, passing the task a name made up of the input asset name with "_ClipTask" appended to it, a media processor instance, a configuration string for handling the processing job, and a TaskOptions setting that specifies that the configuration data should be encrypted. The task is then added to the Tasks collection of the job. An input asset is then specified for the task, along with an output asset whose filename is made up of the input asset name with "_ClipOutput" appended to it. In order to output an unencrypted asset the AssetCreationOptions.None enumeration value is specified.

Handling job notifications from Azure Media Services

Once the EncodingPipeline, and hence the job, has been configured, the job is submitted to Media Services with a notification endpoint that delivers job state messages to the ContosoJobNotificationQueue, as shown in the following code example.

public async Task EncodeAsset(EncodingRequest encodingRequest)
{
    ...
    // create a NotificationEndPoint queue based on the endPointAddress
    string endPointAddress = CloudConfiguration
        .GetConfigurationSetting("ContosoJobNotificationQueueName");

    // setup the notificationEndPoint based on the queue and endPointAddress
    this.notificationEndPoint = 
        this.context.NotificationEndPoints.Create(Guid.NewGuid().ToString(),  
            NotificationEndPointType.AzureQueue, endPointAddress);

    if (this.notificationEndPoint != null)
    {
        job.JobNotificationSubscriptions
            .AddNew(NotificationJobState.FinalStatesOnly, 
                 this.notificationEndPoint);
        await job.SubmitAsync().ConfigureAwait(false);

        // save the job information to the CMS database
        var encodingJob = new EncodingJob()
        {
            EncodingJobUuId = job.Id,
            EncodingTasks = new List<EncodingTask>(),
            VideoId = encodingRequest.VideoId
        };

        foreach(var task in job.Tasks)
        {
            var encodingTask = new EncodingTask() { EncodingTaskUuId = task.Id };
            foreach(var asset in task.InputAssets)
            {
                encodingTask.AddEncodingAsset(new EncodingAsset() 
                    { EncodingAssetUuId = asset.Id, IsInputAsset = true });
            }

            encodingJob.EncodingTasks.Add(encodingTask);
        }

        await this.jobRepository.SaveJob(encodingJob).ConfigureAwait(false);
        await this.UpdateEncodingStatus(job,EncodingState.Encoding)
            .ConfigureAwait(false);
    }
}

This code first retrieves the endpoint address for the ContosoJobNotificationQueue from the configuration file. This queue will receive notification messages about the encoding job, with the JobNotificationMessage class mapping to the notification message format. Therefore, messages received from the queue can be deserialized into objects of the JobNotificationMessage type. The notification endpoint that is mapped to the queue is then created, and attached to the job with the call to the AddNew method. NotificationJobState.FinalStatesOnly is passed to the AddNew method to indicate that we are only interested in the final states of the job processing.

If NotificationJobState.All is passed you will receive all the state changes (Queued -> Scheduled -> Processing -> Finished). However, because Azure Storage Queues don't guarantee ordered delivery it would be necessary to use the Timestamp property of the JobNotificationMessage class to order messages. In addition, duplicate notification messages are possible, so the ETag property on the JobNotificationMessage can be used to query for duplicates.
It is possible that some state change notifications will be skipped.

Note

While the recommended approach to monitor a job's state is by listening to notification messages, an alternative is to check on a job's state by using the IJob.State property. However, a notification message about a job's completion could arrive before the IJob.State property is set to Finished.

The job is then asynchronously submitted, before the job information is saved to the CMS database, with an EncodingJob object (which contains EncodingTask and EncodingAsset objects) representing the job information. Finally, the UpdateEncodingStatus method is called to update the EncodingState for the video from NotStarted to Encoding (the Publish method in the VideosController class was responsible for setting the EncodingState for a newly uploaded video to NotStarted). For more information about how the repository pattern is used to store information in the CMS database, see "Appendix A – The Contoso Web Service."

Markus says:
The EncodingState enumeration is defined in the Contoso.Domain project and has four possible values – NotStarted, Encoding, Complete, and Error.

The ContosoJobNotificationQueue is polled every 10 seconds to examine the state of the job. This process is managed by the Run method in the WorkerRole class in the Contoso.EncodingWorker project. When the queue is polled and a JobNotificationMessage is received the Run method of the JobNotificationCommand class is invoked.

public bool Run(JobNotificationMessage message)
{
    var encodingJobComplete = new JobNotification()
    {
        EventTypeDescription = message.EventType,
        JobId = (string)message.Properties.Where(
            j => j.Key == "JobId").FirstOrDefault().Value,
        OldJobStateDescription = (string)message.Properties.Where(
            j => j.Key == "OldState").FirstOrDefault().Value,
        NewJobStateDescription = (string)message.Properties.Where(
            j => j.Key == "NewState").FirstOrDefault().Value
    };

    this.encodingService.ProcessJobNotification(encodingJobComplete);

    return true;
}

This method converts the JobNotificationMessage to a new JobNotification instance and then calls the ProcessJobNotification method of the EncodingService class. After the Run method has executed the JobNotificationMessage is deleted from the ContosoJobNotificationQueue.

The following code example shows the ProcessJobNotification method in the EncodingService class.

public async Task ProcessJobNotification(JobNotification jobNotification)
{
    if (jobNotification.EventTypeDescription != "JobStateChange")
    {
        return;
    }

    JobState newJobState = (JobState)Enum.Parse(typeof(JobState), 
        jobNotification.NewJobStateDescription);
            
    var job = this.context.Jobs.Where(j => 
              j.Id == jobNotification.JobId).SingleOrDefault();        
    if(job == null)
    {
        return;
    }

    switch (newJobState)
    {
        case JobState.Finished:
            await this.ProcessEncodingOutput(job).ConfigureAwait(false);
            break;
        case JobState.Error:
            await this.UpdateEncodingStatus(job, EncodingState.Error)
                .ConfigureAwait(false);
            break;
    }
}

When this method is first called the EventTypeDescription property of the JobNotification instance will be set to NotificationEndPointRegistration. Therefore the method will return.

Markus says:
The JobState enumeration is defined in the Microsoft.WindowsAzure.MediaServices.Client namespace and has seven possible values – Queued, Scheduled, Processing, Finished, Error, Canceled, and Canceling.

When the JobState of the encoding Job changes, a new JobNotificationMessage is added to the ContosoJobNotificationQueue. When the queue is polled and the message is received, the Run method of the JobNotificationCommand class is invoked, which in turn calls the ProcessJobNotification method of the EncodingService class again. On this call the EventTypeDescription property of the JobNotification instance will be set to JobStateChange. Therefore the job details are retrieved from the CMS database and, provided that the JobState is Finished, the ProcessEncodingOutput method is called. Alternatively, if an error occurred during the job processing, the UpdateEncodingStatus method is called to update the EncodingState for the video to Error.

Markus says:
Before the encoding job was submitted, NotificationJobState.FinalStatesOnly was passed to the AddNew method to indicate that we are only interested in the final states of the job processing. This avoids the ProcessJobNotification method being called for every single JobState change.

The following code example shows the ProcessEncodingOutput method in the EncodingService class.

private async Task ProcessEncodingOutput(IJob job)
{
    // retrieve the job from the CMS database
    var encodingJob = await this.jobRepository.GetJob(job.Id) 
        .ConfigureAwait(false);

    // retrieve the video detail from the CMS database
    var videoDetail = await this.videoRepository.GetVideo(encodingJob.VideoId)
        .ConfigureAwait(false);
    EncodingPipeline pipeline = new EncodingPipeline();
            
    // process the output from the job
    pipeline.ProcessJobOutput(job, this.context, encodingJob, videoDetail);

    // save the updated video and job details to the CMS database
    await this.videoRepository.SaveVideo(videoDetail).ConfigureAwait(false);
    await this.jobRepository.SaveJob(encodingJob).ConfigureAwait(false);
}

This method retrieves the job and video details from the CMS database, before creating a new instance of the EncodingPipeline class in order to process the output assets from the job. Finally, the updated job and video details are saved to the CMS database.

Processing the output assets from the Azure Media Services encoding job

The following figure shows the methods involved in processing output assets from Media Services in each step of the EncodingPipeline.

The methods involved in processing output assets in each step of the EncodingPipeline

The ProcessJobOutput method in the EncodingPipeline class is responsible for processing the encoding job retrieved from the CMS database. The following code example shows this method.

public void ProcessJobOutput(IJob job, CloudMediaContext context,
    EncodingJob encodingJob, VideoDetail videoDetail)
{
    ...
    foreach(var task in job.Tasks)
    {
        var encodingTask = encodingJob.EncodingTasks.SingleOrDefault(
            t => t.EncodingTaskUuId == task.Id);
        if(encodingTask != null)
        {
            foreach(var outputAsset in task.OutputAssets)
            {
                encodingTask.AddEncodingAsset(new EncodingAsset() 
                    { EncodingAssetUuId = outputAsset.Id, IsInputAsset = false });

                foreach(var step in pipelineSteps)
                {
                    if(outputAsset.Name.Contains(step.Key))
                    {
                        var pipelineStep = (IEncodingPipelineStep)Activator
                            .CreateInstance(step.Value);
                        pipelineStep.ProcessOutput(context, outputAsset, 
                            videoDetail);
                    }
                }
            }
        }
    }
    videoDetail.EncodingStatus = EncodingState.Complete;
}

Each encoding job contains a number of encoding tasks, with each encoding task potentially resulting in a number of assets being output. Therefore, this code loops through each task in the job and then each output asset in the task, and then each step in the pipeline to perform string matching on the suffix (_EncodingOutput, _ThumbnailOutput, or _ClipOutput) that is appended to each output asset. Then, the ProcessOutput method is called on the newly instantiated pipeline step that matches up to the output asset suffix.

Finally, the EncodingStatus of the video is set to Complete.
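
The pipelineSteps collection used in the ProcessJobOutput method maps each output asset name suffix to the pipeline step type that processes it. The following sketch shows the mapping implied by the code; the field declaration and initialization shown here are assumptions.

private readonly Dictionary<string, Type> pipelineSteps = new Dictionary<string, Type>
{
    { "_EncodingOutput", typeof(VideoEncodingPipelineStep) },
    { "_ThumbnailOutput", typeof(ThumbnailEncodingPipelineStep) },
    { "_ClipOutput", typeof(ClipEncodingPipelineStep) }
};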

Processing the video encoding pipeline step output assets

As previously mentioned, the VideoEncodingPipelineStep class is responsible for encoding a video. Therefore, its ProcessOutput method, which is shown in the following code example, is responsible for processing the adaptive bitrate encoded video assets.

public void ProcessOutput(CloudMediaContext context, IAsset outputAsset, 
    VideoDetail videoDetail)
{
    context.Locators.Create(LocatorType.OnDemandOrigin, outputAsset, 
        AccessPermissions.Read, TimeSpan.FromDays(30));
    context.Locators.Create(LocatorType.Sas, outputAsset, AccessPermissions.Read, 
        TimeSpan.FromDays(30));

    var mp4AssetFiles = outputAsset.AssetFiles.ToList().Where(
        f => f.Name.EndsWith(".mp4", StringComparison.OrdinalIgnoreCase));
    var xmlAssetFile = outputAsset.AssetFiles.ToList().SingleOrDefault(
        f => f.Name.EndsWith("_manifest.xml", 
        StringComparison.OrdinalIgnoreCase));

    Uri smoothStreamingUri = outputAsset.GetSmoothStreamingUri();
    Uri hlsUri = outputAsset.GetHlsUri();
    Uri mpegDashUri = outputAsset.GetMpegDashUri();

    foreach (var mp4Asset in mp4AssetFiles)
    {
        ILocator originLocator = outputAsset.Locators.ToList().Where(
            l => l.Type == LocatorType.OnDemandOrigin).OrderBy(
            l => l.ExpirationDateTime).FirstOrDefault();
        var uri = new Uri(string.Format(CultureInfo.InvariantCulture,
            BaseStreamingUrlTemplate, originLocator.Path.TrimEnd('/'),
            mp4Asset.Name), UriKind.Absolute);

        videoDetail.AddVideo(new VideoPlay() 
        { 
            EncodingType = "video/mp4", 
            Url = uri.OriginalString, 
            IsVideoClip = false 
        });
    }

    videoDetail.AddVideo(new VideoPlay() 
    { 
        EncodingType = "application/vnd.ms-sstr+xml", 
        Url = smoothStreamingUri.OriginalString, 
        IsVideoClip = false 
    });
    videoDetail.AddVideo(new VideoPlay() 
    { 
        EncodingType = "application/vnd.apple.mpegurl", 
        Url = hlsUri.OriginalString, 
        IsVideoClip = false 
    });
    videoDetail.AddVideo(new VideoPlay() 
    { 
        EncodingType = "application/dash+xml", 
        Url = mpegDashUri.OriginalString, 
        IsVideoClip = false 
    });

    this.ParseManifestXml(xmlAssetFile.GetSasUri().OriginalString, videoDetail);
}
Christine says:
The Windows Store and Windows Phone Contoso Video apps both play smooth streaming assets. The Android and iOS Contoso video apps play HLS assets. However, all apps will fall back to playing the first available multi-bitrate MP4 URL if streaming content is unavailable.

This method creates an on-demand origin locator, and a shared access signature locator, to the output asset, with both locators allowing read access for 30 days. The shared access locator provides direct access to a media file in Azure Storage through a URL. The on-demand origin locator provides access to smooth streaming or Apple HLS content on an origin server, through a URL that references a streaming manifest file.

Bharath says:
When you create a locator for media content there may be a 30-second delay due to required storage and propagation processes in Azure Storage.

A list of the multi-bitrate MP4 files produced by the encoding job is then created, along with the XML file that references the MP4 collection. Extension methods then generate URIs to smooth streaming, HLS, MPEG-DASH, and MP4 progressive download versions of the output asset, with the smooth streaming, HLS, and MPEG-DASH content being packaged on demand. For more information about packaging see "Dynamic packaging."
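
The GetSmoothStreamingUri, GetHlsUri, and GetMpegDashUri extension methods are not shown in this chapter. The following sketch indicates the general approach such a method takes, combining the on-demand origin locator path with the name of the asset's .ism manifest file; the format suffixes noted in the comments are the standard ones used by the origin service.

public static Uri GetSmoothStreamingUri(this IAsset asset)
{
    ILocator originLocator = asset.Locators
        .Where(l => l.Type == LocatorType.OnDemandOrigin)
        .OrderBy(l => l.ExpirationDateTime)
        .FirstOrDefault();

    IAssetFile manifestFile = asset.AssetFiles.ToList().FirstOrDefault(
        f => f.Name.EndsWith(".ism", StringComparison.OrdinalIgnoreCase));

    // Smooth Streaming: <origin locator path>/<manifest file name>/manifest
    // HLS appends (format=m3u8-aapl), and MPEG-DASH appends (format=mpd-time-csf),
    // to the manifest URL.
    return new Uri(string.Format(CultureInfo.InvariantCulture, "{0}/{1}/manifest",
        originLocator.Path.TrimEnd('/'), manifestFile.Name), UriKind.Absolute);
}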

Media Services uses the value of the IAssetFile.Name property when building URLs for streaming content. Therefore, the value of the Name property cannot have any of the percent-encoding reserved characters (!#$&'()*+,/:;=?@[]). In addition, there must be only one '.' for the filename extension. For more information see "Percent-encoding."

The URIs to the different versions of the content (smooth streaming, HLS, MPEG-DASH, progressive MP4s) are then stored in new VideoPlay instances, which also specify the EncodingType of the content. The VideoPlay instances are then added to the VideoDetail object. Through this mechanism encoded content can be played back in client apps across a variety of devices.

Note

The URIs for the encoded assets can be very long. In this guide the URIs have been left unaltered so that you can understand how Media Services functionality works. However, in your own application you may choose to have a mechanism for handling long URIs, such as retrieving a base URI for an asset and then retrieving relative URIs for each asset file.

Processing the thumbnail encoding pipeline step output assets

As previously mentioned, the ThumbnailEncodingPipelineStep class is responsible for producing thumbnail images from a video file. Therefore, its ProcessOutput method, which is shown in the following code example, is responsible for processing the thumbnail images produced from a video file.

public void ProcessOutput(CloudMediaContext context, IAsset outputAsset, 
    VideoDetail videoDetail)
{
    context.Locators.Create(LocatorType.OnDemandOrigin, outputAsset, 
        AccessPermissions.Read, TimeSpan.FromDays(30));
    context.Locators.Create(LocatorType.Sas, outputAsset, AccessPermissions.Read, 
        TimeSpan.FromDays(30));

    foreach (var assetFile in outputAsset.AssetFiles)
    {
        videoDetail.AddThumbnailUrl(new VideoThumbnail() 
            { Url = assetFile.GetSasUri().OriginalString });
    }
}

This method creates an on-demand origin locator and a shared access signature locator to the output asset, with both locators allowing read access for 30 days. The shared access signature locator provides direct access to a media file in Azure Storage through a URL. A VideoThumbnail instance is then created for each thumbnail image, using the shared access signature locator to specify a URL to the thumbnail image, and is added to the VideoDetail instance.

Note

The URIs for thumbnails can be very long. In this guide the URIs have been left unaltered so that you can understand how Media Services functionality works. However, in your own application you may choose to have a mechanism for handling long URIs, such as retrieving a base URI for an asset and then retrieving relative URIs for each asset file.

Processing the clip encoding pipeline step output assets

As previously mentioned, the ClipEncodingPipelineStep class is responsible for producing a clip from the video being encoded. Therefore, its ProcessOutput method, which is shown in the following code example, is responsible for processing the clip produced from a video file.

public void ProcessOutput(CloudMediaContext context, IAsset outputAsset, 
    VideoDetail videoDetail)
{
    context.Locators.Create(LocatorType.OnDemandOrigin, outputAsset, 
        AccessPermissions.Read, TimeSpan.FromDays(30));
    context.Locators.Create(LocatorType.Sas, outputAsset, AccessPermissions.Read, 
        TimeSpan.FromDays(30));

    var mp4AssetFiles = outputAsset.AssetFiles.ToList().Where(
        f => f.Name.EndsWith(".mp4", StringComparison.OrdinalIgnoreCase));
    List<Uri> mp4ProgressiveDownloadUris = mp4AssetFiles.Select(
        f => f.GetSasUri()).ToList();

    mp4ProgressiveDownloadUris.ForEach(v => videoDetail.AddVideo(new VideoPlay() 
    { 
        EncodingType = "video/mp4", 
        Url = v.OriginalString, 
        IsVideoClip = true 
    }));
}

This method creates an on-demand origin locator and a shared access signature locator to the output asset, with both locators allowing read access for 30 days. The shared access signature locator provides direct access to a media file in Azure Storage through a URL. The collection of multi-bitrate MP4 clips produced by the encoding job is then retrieved, before URIs to each file are generated by the GetSasUri extension method. Finally, the URIs to each MP4 file are stored in separate VideoPlay instances, which also specify the EncodingType of the content. The VideoPlay instances are then added to the VideoDetail object.

Note

The URIs to the encoded assets can be very long. In this guide the URIs have been left unaltered so that you can understand how Media Services functionality works. However, in your own application you may choose to have a mechanism for handling long URIs, such as retrieving a base URI for an asset and then retrieving relative URIs for each asset file.

Summary

This chapter has described how the Contoso developers incorporated Media Services' encoding and processing functionality into their web service. It summarized the decisions that they made in order to support their business requirements, and how they designed the code that performs the encoding process.

In this chapter, you saw how to use the Azure Media Encoder to encode uploaded media for delivery. The chapter also discussed how to scale encoding jobs by reserving encoding units that allow you to have multiple encoding tasks running concurrently.

The following chapter discusses the final step in the Media Services workflow – delivering and consuming media.
