June 2010

Volume 25 Number 06

Express Yourself - Encoding Videos Using Microsoft Expression Encoder 3 SDK

By Adam Miller | June 2010

In one of my favorite movie scenes of all time, Clark W. Griswold (Chevy Chase in “Christmas Vacation”) gets trapped in his attic while hiding Christmas presents. To keep warm, he dons pink gloves, a green hat and a brown fur stole pulled from a dusty chest. At the bottom of the chest he finds home movies from his youth, and passes the time watching them (with tears in his eyes), using an old film projector.

Home movies have come a long way since then, but people still have to deal with one of the same issues: How do I show my movie to friends and family? Sites like YouTube, Vimeo and Facebook make sharing easy; but at 100-plus megabytes per minute for high-definition video, getting the data to those sites can be a time-consuming task. Chances are, your portable device, gaming system or home theater media center won’t even play the file. To solve these problems, you need to convert the video to another format. This process is known as encoding.

About Expression Encoder

The Microsoft video encoding tool, Expression Encoder 3, is part of the Expression family of products for creating compelling UIs for Web and desktop applications. Expression Encoder comes in free and paid versions; the paid version is part of both Expression Studio 3 Suite ($599) and Expression Web 3 Suite ($149). The free download does not support encoding to Silverlight Smooth Streaming or H.264 video or using H.264 video as a source, but it does let you encode to Windows Media Video files and it has a nice SDK. Many of the code samples in this article require the paid version of the program; however, all the code samples will build in the free version of the SDK. You’ll just receive an InvalidMediaException or a FeatureNotAvailableException when running.

If you aren’t ready to purchase Expression Suite, you can get started with Expression Encoder by downloading the free version from microsoft.com/expression. It’s also available as part of Expression Professional MSDN Subscription, or Visual Studio Professional with MSDN Premium Subscription. Keep in mind that $149 for a professional video encoding software application with this feature set, wide range of input formats and supported output targets is a relative steal. Similar video encoding solutions can cost upward of $3,000.

No matter which version you choose, you’ll want to install the Encoder 3 QFE. It adds support for additional file types and input devices, improves performance in certain situations, and includes minor bug fixes. The QFE installer can be found on the Expression Encoder page on the Microsoft Expression Web site.

Supported Formats

The following are supported input video formats:

  • Windows Media Video (.wmv)
  • DVD video (.vob)
  • MPEG (.mpg, .mpeg)
  • Audio Video Interleave (.avi)
  • Microsoft Digital Video Recording (.dvr-ms)

The paid version adds the following formats (plus a handful of other formats):

  • MPEG-4 (.mp4, .m4v)
  • Quicktime (.mov)
  • AVC HD (.mts)
  • Mobile Device Video (.3gp, .3g2)

For the most part, Expression Encoder supports any media file Windows Media Player can play. If you want to support even more files (and be able to play them in Windows Media Player), you can install a codec pack such as K-Lite Codec Pack (codecguide.com) or Community Combined Codec Pack (cccp-project.net). Both are based on the open source ffdshow project and will add support for VP6-encoded Flash (.flv) files, H.264 video in the Matroska (.mkv) container, and Ogg (.ogg) video files.

The free version of Expression Encoder supports only Microsoft VC-1 as an output codec. However, this still lets you encode videos for Silverlight (single bitrate only), Xbox 360, Zune and Zune HD. Also, the VC-1 codec is no slouch; its compression is as good as H.264’s (if not better in certain situations). Upgrading to the paid version lets you output Silverlight Smooth Streaming video (multi-bitrate) as well as H.264, which means you can encode videos playable on the iPhone, PS3, Flash Player (version 10 supports H.264/.mp4) and countless other devices.

Encoding 101

Supporting certain output devices requires changing some of the video profile settings, so you’ll need to understand the basics of video encoding. Re-encoding is the process of decompressing a video and re-compressing it using another codec, or changing attributes such as size, aspect ratio or frame rate. Although lossless compression methods exist, they’re rarely used because the resulting video files are still quite large. So to reduce the amount of space necessary to store (and therefore transfer) the video, an algorithm known as a codec is used to compress and decompress the video. The compressed video stream is then stored according to a specification known as a container (such as WMV or MP4). Containers and codecs aren’t an exclusive pairing: although H.264 is the most common codec found in the MP4 container, other codecs can be used.

Bitrate, expressed in kilobits per second, defines how much data should be used to store the compressed video. Reducing the bitrate tells the encoder to compress the video at a higher rate, degrading video quality. There are different ways to tell the encoder how to determine the video’s bitrate. The simplest way is to use a constant bitrate (CBR), which forces the encoder to use the same amount of data for every second of video. A variable bitrate (VBR) can be used to tell the encoder what the overall bitrate of the file should be, but the encoder is allowed to raise or lower the bitrate based on the amount of data needed for a particular section of the video. Variable constrained bitrate is similar to unconstrained VBR, except that you give not only an average bitrate to use, but also a maximum bitrate that can’t be exceeded.
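
To make the distinction concrete, here’s a small, self-contained C# sketch of the idea. This is a toy allocator, not the SDK’s actual algorithm: CBR hands every one-second segment the same number of bits, while a constrained VBR splits an overall budget in proportion to scene complexity without letting any segment exceed the peak.

```csharp
using System;
using System.Linq;

// Relative complexity of six one-second segments (1.0 = average scene).
double[] complexity = { 0.5, 0.5, 2.0, 3.0, 0.5, 0.5 };

// CBR: every segment gets the same bitrate regardless of complexity.
double[] Cbr(double[] segments, double kbps) =>
    segments.Select(_ => kbps).ToArray();

// Constrained VBR (simplified): split the total budget in proportion to
// complexity, but never let one segment exceed the maximum bitrate.
// A real encoder would also redistribute the clamped surplus.
double[] ConstrainedVbr(double[] segments, double avgKbps, double maxKbps)
{
    double budget = avgKbps * segments.Length;
    double total = segments.Sum();
    return segments.Select(c => Math.Min(maxKbps, budget * c / total))
                   .ToArray();
}

Console.WriteLine(string.Join(", ", ConstrainedVbr(complexity, 1000, 1500)));
```

Note how the two most complex segments get clamped to the 1,500 kbps peak, while the simple segments receive far less than the 1,000 kbps average.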

Variable constrained bitrate is useful when encoding Silverlight Smooth Streaming video: it keeps the bitrate from exceeding the client’s bandwidth, which would force the client to request a lower-quality stream. CBR and VBR indicate the amount of compression to use by specifying an overall video file size.

Alternatively, you can tell the encoder to use a quality-based VBR. Instead of specifying the overall size of the video, you specify a percentage of quality (that is, how much data) of the decompressed source video to retain. It takes less data to retain good quality for a cartoon, for example, than for a nature or action-filled video. So if you have a high-quality source and your goal is to convert the source to another format and retain optimal quality, consider using quality-based VBR. These definitions are just the tip of the iceberg, but they are core to choosing your output settings. You’ll find additional encoding definitions throughout this article as they apply to code samples.

Using the SDK

To follow the code samples, you’ll want to use a good-quality video. If you don’t have any high-resolution video lying around, you can get some nice HD videos from microsoft.com/windows/windowsmedia/musicandvideo/hdvideo/contentshowcase.aspx. I’ll use the Adrenaline Rush video as the source for these examples.

After installing Expression Encoder 3, create a new Visual Studio C# Console Application project. Add references to Microsoft.Expression.Encoder.dll and Microsoft.Expression.Encoder.Utilities.dll, located at \Program Files (x86)\Microsoft Expression\Encoder 3\SDK. You’ll also need to add a reference to WindowsBase, which you’ll find in the .NET tab of the Add References dialog. Many of the classes used will be in the Microsoft.Expression.Encoder namespace, so add a using statement for it.

The first item to instantiate is a MediaItem object. Its constructor takes a single string parameter: the path to the file you’re using as the source for the encoding project:

MediaItem src = new MediaItem(@"C:\WMdownloads\AdrenalineRush.wmv");

Creating a MediaItem object takes just a second or two. The SDK is doing a fair amount of work behind the scenes, though, gathering information about the source video, such as its height, width, frame rate (the frequency that individual images should be displayed on the screen) and duration. Information about the audio stream is also gathered at this time.

Next you create an instance of the Job class (which has only a parameterless constructor), and add your MediaItem to its list of MediaItems. The Job class serves as the manager for desired output formats (known as profiles):

Job job = new Job();
job.MediaItems.Add(src);

Now you need to tell the job which audio and video profiles to use during encoding; the easiest way is to use one of the presets defined in the UI. To create a video for the Zune HD, for example, you can use the VC1ZuneHD preset:

job.ApplyPreset(Presets.VC1ZuneHD);

Finally, specify an output directory and start the encoding process:

job.OutputDirectory = @"C:\EncodedFiles";
job.Encode();

Your Program.cs file should be similar to Figure 1.

Figure 1 Creating a Video for Zune HD

using Microsoft.Expression.Encoder;

namespace TestApp
{
  class Program
  {
    static void Main(string[] args)
    {
      MediaItem src = new MediaItem(@"C:\WMdownloads\AdrenalineRush.wmv");
      Job job = new Job();
      job.MediaItems.Add(src);
      job.ApplyPreset(Presets.VC1ZuneHD);
      job.OutputDirectory = @"C:\EncodedFiles";
      job.Encode();
    }
  }
}

There’s one last thing to do before running the application: If you’re using a 64-bit version of Windows, you’ll need to modify the project to build for x86. In the Visual Studio menu bar, select Project and (Project Name) Properties. In the dialog box that opens, select the Build tab and change the Platform Target from “Any CPU” to “x86.”

You’re now ready to run the application and create a video playable on the Zune HD. The encoding process takes a couple of minutes to complete and is extremely CPU-intensive. Video encoding parallelizes well, so multi-core computers have a big advantage here.

Expression Encoder also includes presets for encoding to online services such as YouTube, Vimeo and Facebook. 720p video recorded from my Panasonic Lumix DMC-ZS3 digital camera consumes about 110MB per minute of recorded video. Converting the video using the YouTube HD preset (also 720p) reduces the video to just 16MB. This makes it much more efficient to upload and store locally. Converting it to an .mp4 file also makes it compatible with many more video editing programs.

Custom Settings

To manually produce the same output as the VC1ZuneHD preset, you’d use code similar to Figure 2 to set the video and audio profiles. For the code in Figure 2 to compile, you’ll need to add references to Microsoft.Expression.Encoder.Utilities and System.Drawing, and add using statements for Microsoft.Expression.Encoder.Profiles and System.Drawing. The OutputFormat essentially specifies the container for the output file. I say essentially because encoding for Silverlight works just a little bit differently (as I’ll discuss shortly).

Figure 2 Video and Audio Profile Settings for Zune HD

MediaItem src = new MediaItem(@"C:\WMdownloads\AdrenalineRush.wmv");

src.OutputFormat = new WindowsMediaOutputFormat();
src.OutputFormat.VideoProfile = new AdvancedVC1VideoProfile();
src.OutputFormat.VideoProfile.Bitrate =
    new VariableConstrainedBitrate(1000, 1500);
src.OutputFormat.VideoProfile.Size = new Size(480, 272);
src.OutputFormat.VideoProfile.FrameRate = 30;
src.OutputFormat.VideoProfile.KeyFrameDistance = new TimeSpan(0, 0, 4);
src.OutputFormat.AudioProfile = new WmaAudioProfile();
src.OutputFormat.AudioProfile.Bitrate =
    new VariableConstrainedBitrate(128, 192);
src.OutputFormat.AudioProfile.Codec = AudioCodec.WmaProfessional;
src.OutputFormat.AudioProfile.BitsPerSample = 24;

Job job = new Job();
job.MediaItems.Add(src);
job.OutputDirectory = @"C:\EncodedFiles";
job.Encode();

The VideoProfile specifies the video codec to use, along with the detailed settings to use when encoding. Similarly, the AudioProfile specifies the audio codec and its settings. When constructing a VariableConstrainedBitrate, the first parameter specifies the average bitrate and the second the maximum bitrate. The Size setting indicates the bounding box the encoded video should fit in. The correctly scaled size for the Adrenaline Rush video is 480x272 (maintaining the aspect ratio), so even if I entered 480x480, the resulting video would still be 480x272.
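
The bounding-box behavior comes down to simple aspect-ratio math, sketched below. The FitToBox helper is hypothetical, not part of the SDK, and the real encoder additionally rounds dimensions to codec-friendly multiples (which is why it produces a height of 272 rather than the raw 270 for this 16:9 source).

```csharp
using System;

// Scale a source resolution to fit inside a bounding box while
// preserving the aspect ratio (raw math only; the SDK also rounds
// to codec-friendly multiples).
(int Width, int Height) FitToBox(int srcW, int srcH, int boxW, int boxH)
{
    double scale = Math.Min((double)boxW / srcW, (double)boxH / srcH);
    return ((int)Math.Round(srcW * scale), (int)Math.Round(srcH * scale));
}

// A 1280x720 source in a 480x480 box: prints (480, 270).
Console.WriteLine(FitToBox(1280, 720, 480, 480));
```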

Figure 2’s KeyFrameDistance property refers to a video-encoding concept I haven’t yet discussed. Most video encoding works by storing only the changes from one frame to the next, rather than the entire picture for each video frame. Key frames are the frames that contain the entire image. This code creates key frames every four seconds. Key frames are also created automatically when there are large changes in the video, such as a scene change, but you should create them at predefined intervals as well to support seeking within the movie during playback.
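
As a rough illustration of that policy (a hypothetical helper, not SDK code), key-frame placement can be thought of as merging detected scene changes with a forced maximum interval:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: place a key frame at every scene change, and force one
// whenever `intervalSec` has elapsed since the last key frame.
List<double> PlaceKeyFrames(double durationSec, double intervalSec,
                            double[] sceneChanges)
{
    var keys = new List<double> { 0.0 };  // first frame is always a key frame
    var pending = new Queue<double>(sceneChanges.OrderBy(t => t));
    double last = 0.0;
    while (true)
    {
        while (pending.Count > 0 && pending.Peek() <= last)
            pending.Dequeue();            // drop scene changes already covered
        double forced = last + intervalSec;
        double next = pending.Count > 0 && pending.Peek() < forced
            ? pending.Dequeue()           // scene change arrives before the interval
            : forced;                     // no scene change: force a key frame
        if (next >= durationSec) break;
        keys.Add(next);
        last = next;
    }
    return keys;
}

// Ten seconds, four-second interval, scene change at 5s: prints 0, 4, 5, 9.
Console.WriteLine(string.Join(", ", PlaceKeyFrames(10, 4, new[] { 5.0 })));
```

Note that the interval restarts after the scene-change key frame at 5s, so the next forced key frame lands at 9s rather than 8s.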

Silverlight Smooth Streaming

Silverlight Smooth Streaming dynamically switches the bitrate of the media file being played based on current network conditions. A Smooth Streaming project consists of individual videos stored in .ismv files, as well as .ism and .ismc metadata files that support Smooth Streaming playback.

To create a Silverlight Smooth Streaming project, multiple changes must be made. First, change the KeyFrameDistance to two seconds. The video will still play if the KeyFrameDistance is left at four seconds, but you may notice hiccups in playback when the player switches bitrates. The Silverlight player will request the video in two-second chunks, so playback is more consistent if there’s a key frame at the beginning of each request. You also need to add the following line:

src.OutputFormat.VideoProfile.SmoothStreaming = true;

Setting SmoothStreaming to true tells the encoder to output the videos to .ismv files and create the .ism and .ismc files. Having only one bitrate isn’t really a smooth streaming project, so to create multiple output bitrates, you need to add multiple streams to the VideoProfile. Do this using code similar to Figure 3.

Figure 3 Adding Silverlight Smooth Streaming

MediaItem src = new MediaItem(@"C:\WMdownloads\AdrenalineRush.wmv");
src.OutputFormat = new WindowsMediaOutputFormat();
src.OutputFormat.VideoProfile = new AdvancedVC1VideoProfile();
src.OutputFormat.VideoProfile.KeyFrameDistance = new TimeSpan(0, 0, 2);
src.OutputFormat.VideoProfile.SmoothStreaming = true;
src.OutputFormat.VideoProfile.Streams.Add(new StreamInfo(
    new VariableConstrainedBitrate(2000, 3000), new Size(1280, 720)));
src.OutputFormat.VideoProfile.Streams.Add(new StreamInfo(
    new VariableConstrainedBitrate(1400, 1834), new Size(848, 476)));
src.OutputFormat.VideoProfile.Streams.Add(new StreamInfo(
    new VariableConstrainedBitrate(660, 733), new Size(640, 360)));
src.OutputFormat.AudioProfile = new WmaAudioProfile();
src.OutputFormat.AudioProfile.Bitrate =
    new VariableConstrainedBitrate(128, 192);
src.OutputFormat.AudioProfile.Codec = AudioCodec.WmaProfessional;
src.OutputFormat.AudioProfile.BitsPerSample = 24;

Job job = new Job();
job.MediaItems.Add(src);
job.OutputDirectory = @"C:\EncodedFiles";
job.Encode();

Here the code specifies three different bitrates and sizes to encode. For optimum quality, the video size needs to shrink as the bitrate is reduced. When choosing your own bitrates, you can use the IIS Smooth Streaming settings in the Expression Encoder 3 UI as a guide. Note that it’s not possible to gain quality by encoding a video at a higher resolution than the source file, and it only makes sense to encode at a higher bitrate than the source file if you’re using a weaker compression method. If the SDK was able to determine the bitrate of the source file, it will be available in the MediaItem’s SourceVideoProfile property:

int bitrate = ((ConstantBitrate)src.SourceVideoProfile.Bitrate).Bitrate;

If the SDK couldn’t obtain the bitrate of the source file, you can get a pretty close estimate based on the file size. Here’s the formula:

Approximate bitrate in kb/s = (file size in kilobytes * 8 / video duration in seconds) - audio bitrate in kb/s

You can use the System.IO.FileInfo class to get the source-file size, and the SDK to get the duration (MediaItem.FileDuration property) and possibly the audio bitrate. If you don’t know the audio bitrate, use 128 or 160 to estimate (most audio bitrates are between 64 and 192); you may also be able to get the audio bitrate in the Windows Media Player Properties window (Press Alt to show the menu, then File | Properties).
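
The formula translates directly into code. Here’s a sketch (the helper name is mine, not the SDK’s); in practice you’d feed it FileInfo.Length / 1024 for the size and MediaItem.FileDuration.TotalSeconds for the duration:

```csharp
using System;

// Approximate the video bitrate of a file from its total size, its
// duration, and an assumed audio bitrate, per the formula above.
double EstimateVideoBitrateKbps(double fileSizeKB, double durationSec,
                                double audioKbps)
    => fileSizeKB * 8 / durationSec - audioKbps;

// A 110MB, one-minute clip with 128 kbps audio: roughly 14,890 kbps of video.
Console.WriteLine(EstimateVideoBitrateKbps(110 * 1024, 60, 128));
```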

Monitoring Progress

Because an encoding job can take hours to complete, it’s helpful to be able to see the encoding progress. The SDK provides a simple way to monitor the encoding process via an event you can add a handler for:

job.EncodeProgress += new EventHandler<EncodeProgressEventArgs>(OnProgress);

Add a method like the following to handle the event:

static void OnProgress(object sender, EncodeProgressEventArgs e)
{
  Console.WriteLine(
    (100 * (e.CurrentPass - 1) + e.Progress) / e.TotalPasses + "%");
}

Multi-pass encoding is a new concept relevant to this code sample. When using a variable bitrate to encode, the process is done in two steps, known as passes. During the first pass, the source video is analyzed to determine which parts are most complex and would benefit from an increased bitrate. During the second pass, the video is encoded using the information obtained during the first pass. Thus, if you use a constant bitrate, there’s no need to use the CurrentPass or TotalPasses properties of the EncodeProgressEventArgs class.
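
The arithmetic in the handler simply scales each pass to an equal share of the total, which is easy to verify in isolation (OverallProgress is my name for the calculation, not an SDK member):

```csharp
using System;

// Overall progress when each pass contributes an equal share of the
// total: in a two-pass job, pass 1 covers 0-50% and pass 2 covers 50-100%.
double OverallProgress(int currentPass, double passProgress, int totalPasses)
    => (100.0 * (currentPass - 1) + passProgress) / totalPasses;

// Halfway through the second of two passes: prints 75.
Console.WriteLine(OverallProgress(2, 50, 2));
```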

Combining Videos

If you want to encode only part of a video or combine multiple videos into one, the SDK provides support. To modify the start and stop time for a source media item, you can modify the Clips property. To encode only the first six seconds of a video, use code similar to:

src.Sources[0].Clips[0].StartTime = new TimeSpan(0);
src.Sources[0].Clips[0].EndTime = new TimeSpan(0, 0, 6);

To add other videos as source files, append additional videos to the Sources property of your MediaItem. This encodes the source files, in order, to a single output file:

MediaItem src = new MediaItem(@"C:\WMdownloads\AdrenalineRush.wmv");
src.Sources.Add(new Source(@"C:\WMdownloads\Video2.wmv"));

Live Encoding

Expression Encoder also supports encoding from live sources such as a webcam. The concept (and code) is similar to encoding video files, but you use a different set of classes. These are found in the Microsoft.Expression.Encoder.Live namespace.

The first class to use is LiveJob. LiveJob works like Encoder.Job—it handles the work of encoding the video. However, in a live scenario the OutputFormat is a property of the LiveJob itself (no MediaItem is necessary). When a LiveJob object is instantiated, it looks for video and audio input devices attached to your computer and populates its VideoDevices and AudioDevices properties. You can then use these as input sources for the encoder. Figure 4 shows an example.

Figure 4 Encoding Live Video

using (LiveJob job = new LiveJob())
{
  LiveDevice videoDevice = job.VideoDevices[0];
  LiveDevice audioDevice = job.AudioDevices[0];
  LiveDeviceSource liveSource = job.AddDeviceSource(videoDevice, audioDevice);
  job.ActivateSource(liveSource);
  WindowsMediaBroadcastOutputFormat outputFormat =
    new WindowsMediaBroadcastOutputFormat();
  outputFormat.BroadcastPort = 8080;
  job.OutputFormat = outputFormat;
  job.StartEncoding();
  Console.WriteLine("Press enter to stop encoding...");
  Console.ReadLine();
  job.StopEncoding();
}

This will start a live encoding session using a webcam (assuming you have one connected) and broadcast it on your local machine on port 8080. To view the live encoding, open Windows Media Player and select File | Open URL and enter mms://localhost:8080. After some buffering, you should see the video from your webcam, though you’ll notice a 20- to 30-second lag due to the time it takes to encode and transport the stream. You could potentially use this video as a source for Windows Media Services or IIS Media Services to broadcast to the world.

Additional Tools

If you aren’t sure whether the encoding settings you’ve chosen will give you the output quality you need, the Expression Encoder 3 UI provides a handy feature called A/B Compare. This lets you encode five seconds of video surrounding the current playback position. The encoded video will appear split-screen with your source video (see Figure 5), so you can easily compare the quality of the encoded video with the original.

Figure 5 A/B Compare in Expression Encoder 3


You can then save the current settings as a user-defined preset by clicking Edit | Save current settings as preset. The preset is stored as an XML file, which you can load with the SDK by passing its path to ApplyPreset (the path below is just an example; use wherever you saved the preset):

job.ApplyPreset(@"C:\Presets\MyCustomPreset.xml");

If you’re already thinking about how easy it would be to automate the video conversion process with a console application, take a look at the Convert-Media PowerShell Module for Expression Encoder, available at convertmedia.codeplex.com. This PowerShell module wraps the Expression Encoder SDK, providing a command-line encoding interface without writing any code. As with all CodePlex projects, it is open source.

Hopefully you now understand the core terminology related to video encoding and can make educated decisions on which codec and bitrate to use. You also know how to use the Expression Encoder 3 SDK to encode videos for specific targets such as Xbox 360, iPhone and Silverlight, as well as live streaming video. So don’t wait to be trapped in your attic like Clark W. Griswold to realize the value of your home videos and forgotten memories. Convert them to a format that will make them accessible to the world.

Adam Miller is a software engineer for Nebraska Global in Lincoln, Neb. You can follow Miller’s blog at blog.milrr.com.

Thanks to the following technical expert for reviewing this article: Ben Rush