January 2012

Volume 27 Number 01

Windows Phone - Using Cameras in Your Windows Phone Application

By Matt Stroshane | January 2012

Pictures can communicate with an efficiency and elegance that can’t be matched by words alone. You’ve heard that “a picture is worth a thousand words”; imagine the types of problems you could solve if your Windows Phone application had direct access to a camera. Well, starting with Windows Phone 7.5, you can begin solving those “thousand-word” problems using the on-device cameras.

In this article, I’ll introduce the front and back cameras, the camera APIs and the associated manifest capabilities, plus I’ll discuss a few different ways you can use a camera in your next Windows Phone 7.5 application. I’ll cover:

  • Capturing photos: I’ll create a very simple photo app.
  • Accessing the camera preview buffer: I’ll introduce the Camera Grayscale Sample.
  • Recording video: I’ll review the Video Recorder Sample.

You’ll need the Windows Phone SDK 7.1 to create a Windows Phone 7.5 application. The SDK includes code examples that demonstrate each of these scenarios in great detail. For more information, see the Basic Camera Sample, Camera Grayscale Sample, and the Video Recorder Sample on the Code Samples page in the SDK at code.msdn.microsoft.com/windowsapps/.

Note that this article won’t cover the camera capture task, which has been available since Windows Phone 7. Though this task is a simple way to acquire photos for your application, it doesn’t let you capture photos programmatically or access the camera preview buffer.

A Windows Phone 7.5 device can include up to two cameras, designated as primary and front-facing. The primary camera is on the back of the device and typically offers a higher resolution and more features than the front-facing camera. Neither of these cameras is required on a Windows Phone 7.5 device, so be sure to check for their presence in your code before you create your camera objects. Later on, I’ll demonstrate how to use the static IsCameraTypeSupported method for this purpose.

Many of the Windows Phone devices available in the United States include a primary camera with a 5MP or greater sensor, auto-focus and a flash. The front-facing camera is a new feature for Windows Phone 7.5.

For more information about device specifications, see the Buy tab at windowsphone.com.

Capturing Photos

You can use the same classes to access both the primary camera and the front-facing camera. As you’ll see, selecting the camera type is simply a matter of specifying a single parameter in the constructor of the PhotoCamera object. From a design perspective, however, you might want to handle interaction with the front-facing camera differently. For example, you might want to flip images from the front-facing camera to give the user a more natural “mirror-like” experience.

When capturing photos in a Windows Phone 7.5 app, you’ll work primarily with the PhotoCamera class from the Microsoft.Devices namespace. This class offers a great deal of control over the camera settings and behavior. For example, you can:

  • Activate the camera shutter with the PhotoCamera.CaptureImage method
  • Trigger auto focus with the PhotoCamera.Focus method
  • Specify picture resolution by setting the PhotoCamera.Resolution property
  • Specify the flash settings by setting the PhotoCamera.FlashMode property
  • Incorporate the hardware shutter button with events from the static CameraButtons class
  • Implement touch focus with the PhotoCamera.FocusAtPoint method

In this article, I’ll demonstrate only the first point. For an example that shows how to do all of these, see the Basic Camera Sample from the Windows Phone SDK code samples page.

Note that even when a camera is available, it might not support all of these APIs. The following approaches can help determine what is available:

  • Camera: Use the PhotoCamera.IsCameraTypeSupported static method.
  • Auto focus: Check the PhotoCamera.IsFocusSupported property.
  • Picture resolution settings: Check the PhotoCamera.AvailableResolutions collection.
  • Flash settings: Use the PhotoCamera.IsFlashModeSupported method.
  • Point-specific focus: Check the PhotoCamera.IsFocusAtPointSupported property.
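Put together, those checks might look like the following sketch, run after construction of the camera object. The structure and comments are mine, not from the SDK sample; only the member names come from the API:

```csharp
// Sketch: probe camera features before enabling the related UI.
// Assumes a class-level PhotoCamera field named "cam".
if (PhotoCamera.IsCameraTypeSupported(CameraType.Primary))
{
    cam = new PhotoCamera(CameraType.Primary);

    if (cam.IsFocusSupported)
    {
        // Safe to call cam.Focus() later.
    }

    if (cam.IsFlashModeSupported(FlashMode.On))
    {
        // Safe to set cam.FlashMode = FlashMode.On.
    }

    if (cam.IsFocusAtPointSupported)
    {
        // Safe to call cam.FocusAtPoint(x, y) in a tap handler.
    }
}
```

A practical pattern is to leave the corresponding buttons disabled by default and enable each one only when its check succeeds.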

To give you an idea of how to capture photos in your app, let’s walk through a simple app that captures a photo when you touch the viewfinder and then saves it to the Camera Roll folder in the Pictures Hub.

Start with a standard Windows Phone project, using the Windows Phone Application template. You can write Windows Phone 7.5 apps in C# or Visual Basic. This example will use C#.

I’ll simplify this example by limiting the app to a landscape-only orientation and using just the primary camera. Managing orientation for the device and two cameras, each pointed in different directions, can become confusing pretty quickly; I recommend testing with a physical device to ensure you achieve the desired behavior. I’ll cover orientation in more detail later.

On MainPage.xaml, update the PhoneApplicationPage attributes as follows:

SupportedOrientations="Landscape" Orientation="LandscapeLeft"

Then, replace the contents of the LayoutRoot grid with Canvas and TextBlock as shown in Figure 1.

Figure 1 Adding a Canvas and a TextBlock

<Canvas x:Name="viewfinderCanvas" Width="640" Height="480" Tap="viewfinder_Tapped">
  <Canvas.Background>
    <VideoBrush x:Name="viewfinderBrush">
      <VideoBrush.RelativeTransform>
        <CompositeTransform
          x:Name="viewfinderTransform"
          CenterX="0.5"
          CenterY="0.5"/>
      </VideoBrush.RelativeTransform>
    </VideoBrush>
  </Canvas.Background>
</Canvas>
<TextBlock Width="626" Height="40"
           HorizontalAlignment="Left"
           Margin="8,428,0,0"
           Name="txtMessage"
           VerticalAlignment="Top"
           FontSize="24"
           FontWeight="ExtraBold"
           Text="Tap the screen to capture a photo."/>

The XAML in Figure 1 uses a VideoBrush in a Canvas to display the viewfinder and provides a TextBlock for communicating with the user. The camera sensor has a 4:3 aspect ratio, and the screen aspect ratio is 15:9. If you don’t specify a canvas size with the same 4:3 ratio (640x480), the image will appear stretched across the screen.

In the Canvas element, the Tap attribute specifies the method to call when the user taps the screen—the viewfinder_Tapped method. To display the image stream from the camera preview buffer, a VideoBrush named viewfinderBrush is specified as the background of the canvas. Like a viewfinder from a single-lens reflex (SLR) camera, viewfinderBrush lets you see the camera preview frames. The transform in viewfinderBrush essentially “pins” the viewfinder to the center of the canvas as it’s rotated. I’ll discuss the code behind this XAML in the following sections. Figure 2 shows the Simple Photo App UI.

The Simple Photo App UI
Figure 2 The Simple Photo App UI

Initializing and Releasing the Camera To capture photos and save them to the Camera Roll folder in the Pictures Hub, you’ll need the PhotoCamera and MediaLibrary classes, respectively. Start by adding a reference to the Microsoft.Xna.Framework assembly. You don’t need to know XNA programming for this example; you do need types in this assembly, though, to access the media library.

At the top of the MainPage.xaml.cs file, add directives for the camera and media library:

using Microsoft.Devices;
using Microsoft.Xna.Framework.Media;

In the MainPage class, add the following class-level variables:

private int photoCounter = 0;
PhotoCamera cam;
MediaLibrary library = new MediaLibrary();

The camera can take a few seconds to initialize. By declaring the PhotoCamera object at the class level, you can create it when you navigate to the page and remove it from memory when you navigate away. We’ll use the OnNavigatedTo and OnNavigatingFrom methods for this purpose.

In the OnNavigatedTo method, create the camera object, register for the camera events that will be used, and set the camera preview as the source of the viewfinder, viewfinderBrush. Although common, cameras are optional in Windows Phone 7.5; it’s important to check for them before you create the camera object. If the primary camera isn’t available, the method writes a message to the user.

Add the methods shown in Figure 3 to the MainPage class.

Figure 3 The OnNavigatedTo and OnNavigatingFrom Methods

protected override void OnNavigatedTo
  (System.Windows.Navigation.NavigationEventArgs e)
{
  if (PhotoCamera.IsCameraTypeSupported(CameraType.Primary) == true)
  {
    cam = new PhotoCamera(CameraType.Primary);
    cam.CaptureImageAvailable +=
      new EventHandler<Microsoft.Devices.ContentReadyEventArgs>
        (cam_CaptureImageAvailable);
    viewfinderBrush.SetSource(cam);
  }
  else
  {
    txtMessage.Text = "A Camera is not available on this device.";
  }
}
protected override void OnNavigatingFrom
  (System.Windows.Navigation.NavigatingCancelEventArgs e)
{
  if (cam != null)
  {
    cam.Dispose();
  }
}

When navigating away from the page, use the OnNavigatingFrom method to dispose of the camera object and unregister any camera events. This helps minimize power consumption, expedite shutdown and release memory.

Capturing a Photo As shown in the XAML, when the user taps on the viewfinder, the viewfinder_Tapped method is called. This method initiates the image capture when the camera is ready. If the camera hasn’t initialized or is currently in the process of capturing another image, an exception will be thrown. To help mitigate exceptions, consider disabling the mechanisms that trigger photo capture until the Initialized event fires. To keep things simple in this example, we’ll skip that step.
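If you do want that safeguard, one approach is to track readiness with a flag set from the camera’s Initialized event. This is a sketch of my own, not part of the sample; the isCameraReady field and its use in viewfinder_Tapped are assumptions:

```csharp
// Class-level flag; check it in viewfinder_Tapped before calling CaptureImage.
private volatile bool isCameraReady = false;

// In OnNavigatedTo, after creating the camera object, register the handler:
// cam.Initialized += cam_Initialized;

void cam_Initialized(object sender, CameraOperationCompletedEventArgs e)
{
    // Succeeded is false if the camera failed to initialize.
    isCameraReady = e.Succeeded;
}
```

Remember to reset the flag and unregister the handler in OnNavigatingFrom, alongside the Dispose call.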

Figure 4 shows the code you need to add to the MainPage class.

Figure 4 The viewfinder_Tapped Method

void viewfinder_Tapped(object sender, GestureEventArgs e)
{
  if (cam != null)
  {
    try
    {
      cam.CaptureImage();
    }
    catch (Exception ex)
    {
      this.Dispatcher.BeginInvoke(delegate()
      {
        txtMessage.Text = ex.Message;
      });
    }
  }
}

Capturing a photo and saving it are asynchronous endeavors. When the CaptureImage method is called, a chain of events initiates and control is passed back to the UI. As shown in the event sequence diagram in Figure 5, there are two stages to each image capture. First, the camera sensor captures the photo, and then images are created based on the sensor data.

The Image-Capture Event Sequence of the PhotoCamera Class
Figure 5 The Image-Capture Event Sequence of the PhotoCamera Class

Saving a Photo After the sensor captures the photo, two image files are created in parallel, a full-size image file and a thumbnail. You’re under no obligation to use both of them. Each is available as a JPG image stream from the e.ImageStream property in the arguments of the corresponding events.

The media library automatically creates its own thumbnails for display in the Pictures Hub of the device, so this example doesn’t need the thumbnail version of the image. However, if you want to display a thumbnail in your own app, the e.ImageStream from the CaptureThumbnailAvailable event handler would be an efficient choice.

When the stream is available, you can use it to save the image to several locations. For example:

  • Camera Roll folder: Use the MediaLibrary.SavePictureToCameraRoll method.
  • Saved Pictures folder: Use the MediaLibrary.SavePicture method.
  • Isolated Storage: Use the IsolatedStorageFileStream.Write method.
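As a rough sketch of the Isolated Storage option: the helper name, file name parameter and buffer size below are my own choices for illustration; the SDK sample handles this more thoroughly.

```csharp
// Requires: using System.IO; using System.IO.IsolatedStorage;
// Sketch: copy the captured JPG stream into an Isolated Storage file.
void SaveToIsolatedStorage(Stream imageStream, string fileName)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var file = store.CreateFile(fileName))
    {
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = imageStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            file.Write(buffer, 0, bytesRead);
        }
    }
}
```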

In this example, we’ll save the image to the camera roll folder. For an example of how to save an image to Isolated Storage, see the Basic Camera Sample in the Windows Phone SDK. Add the code in Figure 6 to the MainPage class.

Figure 6 Saving an Image to the Camera Roll Folder

void cam_CaptureImageAvailable(object sender,
  Microsoft.Devices.ContentReadyEventArgs e)
{
  photoCounter++;
  string fileName = photoCounter + ".jpg";
  Deployment.Current.Dispatcher.BeginInvoke(delegate()
  {
    txtMessage.Text = "Captured image available, saving picture.";
  });
  library.SavePictureToCameraRoll(fileName, e.ImageStream);
  Deployment.Current.Dispatcher.BeginInvoke(delegate()
  {
    txtMessage.Text = "Picture has been saved to camera roll.";
  });
}

In the code in Figure 6, messages are sent to the UI before and after the image is saved to the Camera Roll folder. These messages are simply to help you understand what’s going on; they’re not required. The BeginInvoke method is needed to pass the message to the UI thread. If you didn’t use BeginInvoke, a cross-threading exception would be thrown. For brevity, this method lacks error-handling code.

Handling Rotation When you save a picture to the media library, the correct orientation of the image will be noted in the file’s EXIF information. The main concern of your app is how the preview from the camera is oriented in the UI. To keep the preview appearing in the correct orientation, rotate the viewfinder (the VideoBrush) as applicable. Rotation is achieved by overriding the OnOrientationChanged virtual method. Add the code in Figure 7 to the MainPage class.

Figure 7 Overriding the OnOrientationChanged Virtual Method

protected override void OnOrientationChanged
  (OrientationChangedEventArgs e)
{
  if (cam != null)
  {
    Dispatcher.BeginInvoke(() =>
    {
      double rotation = cam.Orientation;
      switch (this.Orientation)
      {
        case PageOrientation.LandscapeLeft:
          rotation = cam.Orientation - 90;
          break;
        case PageOrientation.LandscapeRight:
          rotation = cam.Orientation + 90;
          break;
      }
      viewfinderTransform.Rotation = rotation;
    });
  }
  base.OnOrientationChanged(e);
}

Without any adjustment to the viewfinder orientation, the viewfinder for a typical primary camera will appear oriented correctly only when the hardware shutter button is pointing up (LandscapeLeft). If you rotate the device such that the hardware shutter button is pointing down (LandscapeRight), the viewfinder must be rotated 180 degrees to display correctly in the UI. The PhotoCamera Orientation property is used here in case the physical orientation of the primary camera is atypical.

Declaring Application Capabilities Finally, when your application uses a camera, you must declare that it does so in the application manifest file, WMAppManifest.xml. No matter which camera is used, you’ll need the ID_CAP_ISV_CAMERA capability. Optionally, you can use the ID_HW_FRONTCAMERA capability to designate that your app requires a front-facing camera:

<Capability Name="ID_CAP_ISV_CAMERA"/>
<Capability Name="ID_HW_FRONTCAMERA"/>

Your camera app won’t run without the ID_CAP_ISV_CAMERA capability. If you haven’t had a problem running it so far, that’s because this capability is added to new Windows Phone projects automatically. If you’re upgrading your app, though, you’ll need to add it manually. ID_HW_FRONTCAMERA must always be added manually, but omitting it won’t prevent your app from running.

These capabilities help warn users who don’t have a camera on their device, but nothing stops them from downloading and purchasing your app. For that reason, it’s a good idea to make a trial version of your app available. Then, if users miss the warnings, they won’t spend money only to learn that your app won’t work as expected on their device. Your app ratings will thank you later.

If you haven’t done so yet, press F5 and debug this simple camera app on your device. You can debug the app on the emulator, but you’ll see only a black box moving around the screen because the emulator doesn’t have a physical camera. When debugging with a device, keep in mind that you can’t view your new images in the Pictures Hub until you untether the device from your PC.

To go deeper, take a look at the Basic Camera Sample in the Windows Phone SDK. That sample demonstrates the full API for capturing photos: from adjusting flash and resolution settings to incorporating touch focus and the hardware shutter button.

Accessing the Camera Preview Buffer

In the previous example, the frames from the camera preview buffer were streamed to the viewfinder. The PhotoCamera class also exposes the current frame of the preview buffer to allow pixel-by-pixel manipulation of each frame. Let’s take a look at a sample from the Windows Phone SDK to see how we can manipulate frames from the preview buffer and display them on a writable bitmap in the UI.

The PhotoCamera class exposes the current frame of the preview buffer with the following “get preview” methods:

  • GetPreviewBufferArgb32: Integer array of the current frame in ARGB format
  • GetPreviewBufferYCbCr: Byte array of the current frame in YCbCr format
  • GetPreviewBufferY: Byte array of the luminance plane only, in a similar format

ARGB is the format used to describe color in Silverlight for Windows Phone applications. YCbCr enables efficient image processing, but Silverlight can’t use YCbCr. If you want to manipulate a YCbCr frame in your application, you have to convert the frame to ARGB before it can be displayed.
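The per-pixel math for that conversion is the standard BT.601 transform. The following is a hedged sketch of converting one YCbCr triple to a packed ARGB integer; the helper names and full-range assumption are mine, and the SDK documents the exact layout of the YCbCr buffer:

```csharp
// Convert one YCbCr pixel (BT.601, full-range assumption) to a packed ARGB int.
static int YCbCrToArgb(byte y, byte cb, byte cr)
{
    double r = y + 1.402 * (cr - 128);
    double g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128);
    double b = y + 1.772 * (cb - 128);

    // Alpha is fully opaque; each channel is clamped to 0-255.
    return (255 << 24) | (Clamp(r) << 16) | (Clamp(g) << 8) | Clamp(b);
}

static int Clamp(double v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (int)v);
}
```

A neutral pixel (y, cb, cr all 128) maps to mid-gray, which is a quick sanity check for any implementation you write.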

The Camera Grayscale Sample from the Windows Phone SDK (see Figure 8) demonstrates how to manipulate ARGB frames from the preview buffer and write them to a writable bitmap image in almost real time. In this sample, each frame is converted from color to grayscale. Note that the goal of this sample is to demonstrate ARGB manipulation; if your app needs only grayscale, consider using the GetPreviewBufferY method instead.

The Camera Grayscale Sample UI
Figure 8 The Camera Grayscale Sample UI

In the XAML file, an image tag is used to host the corresponding writable bitmap (the black-and-white image in the lower-left corner of the UI), like so:

<Image x:Name="MainImage"
       Width="320" Height="240"
       HorizontalAlignment="Left" VerticalAlignment="Bottom" 
       Margin="16,0,0,16"
       Stretch="Uniform"/>

When a button is pressed to enable the grayscale conversion, a new thread is created to perform the processing; a writable bitmap, with the same dimensions as the preview buffer, is created and assigned as the source of the Image control:

wb = new WriteableBitmap(
        (int)cam.PreviewResolution.Width,
        (int)cam.PreviewResolution.Height);
this.MainImage.Source = wb;

The thread performs its work in the PumpARGBFrames method. There, an integer array named ARGBPx is used to hold a snapshot of the current preview buffer. Each integer in the array represents one pixel of the frame, in ARGB format. This array is also created with the same dimensions as the preview buffer:

int[] ARGBPx = new int[
    (int)cam.PreviewResolution.Width *
    (int)cam.PreviewResolution.Height];

While the “grayscale” feature of the sample is enabled, the thread copies the current frame in the preview buffer to the ARGBPx array. Here, phCam is the camera object:

phCam.GetPreviewBufferArgb32(ARGBPx);

Once the buffer has been copied to the array, the thread loops through each pixel and converts it to grayscale (see the sample for more details about how that’s accomplished):

for (int i = 0; i < ARGBPx.Length; i++)
{
  ARGBPx[i] = ColorToGray(ARGBPx[i]);
}
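The sample’s ColorToGray helper isn’t reproduced in this article. One plausible implementation, using the standard BT.601 luminance weights, looks like this; treat it as a sketch rather than the sample’s exact code:

```csharp
// Convert one packed ARGB pixel to its grayscale equivalent.
static int ColorToGray(int pixel)
{
    byte a = (byte)(pixel >> 24);
    byte r = (byte)(pixel >> 16);
    byte g = (byte)(pixel >> 8);
    byte b = (byte)pixel;

    // Weighted luminance (ITU-R BT.601); alpha is preserved.
    byte gray = (byte)(0.299 * r + 0.587 * g + 0.114 * b);

    return (a << 24) | (gray << 16) | (gray << 8) | gray;
}
```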

Finally, before processing the next frame, the thread uses the BeginInvoke method to update the WriteableBitmap in the UI. The CopyTo method overwrites the WriteableBitmap pixels with the ARGBPx array, and the Invalidate method forces the WriteableBitmap to redraw, like so:

Deployment.Current.Dispatcher.BeginInvoke(delegate()
{
  // Copy to WriteableBitmap.
  ARGBPx.CopyTo(wb.Pixels, 0);
  wb.Invalidate();
  pauseFramesEvent.Set();
});

The WriteableBitmap class enables a wide range of creative possibilities. Now you can incorporate the camera preview buffer into your repertoire of visuals for the UI.

Recording Video

Although you can use the PhotoCamera class to stream the preview buffer to the UI, you can’t use it to record video. For that, you’ll need some classes from the System.Windows.Media namespace. In the final part of this article, we’ll look at the Video Recorder Sample from the Windows Phone SDK (see Figure 9) to see how to record video to an MP4 file in Isolated Storage. You can find this sample on the SDK code samples page.

The Video Recorder Sample UI
Figure 9 The Video Recorder Sample UI

The primary classes for video recording are:

  • CaptureDeviceConfiguration: Use to check availability of a video capture device
  • CaptureSource: Use to start and stop video recording/preview
  • VideoBrush: Use to fill Silverlight UI controls with a CaptureSource or PhotoCamera object
  • FileSink: Use to record video to Isolated Storage when a CaptureSource object is running

In the XAML file, a Rectangle control is used to display the camera viewfinder:

<Rectangle
  x:Name="viewfinderRectangle"
  Width="640"
  Height="480"
  HorizontalAlignment="Left"
  Canvas.Left="80"/>

A Rectangle control isn’t required to display video, however. You could use the Canvas control, as shown in the first example. The Rectangle control is used simply to show another way to display video.

At the page level, the following variables are declared:

// Viewfinder for capturing video.
private VideoBrush videoRecorderBrush;
// Source and device for capturing video.
private CaptureSource captureSource;
private VideoCaptureDevice videoCaptureDevice;
// File details for storing the recording.       
private IsolatedStorageFileStream isoVideoFile;
private FileSink fileSink;
private string isoVideoFileName = "CameraMovie.mp4";

When a user navigates to the page, the InitializeVideoRecorder method starts the camera and sends the camera preview to the rectangle. After creating the captureSource and fileSink objects, the InitializeVideoRecorder method uses the static CaptureDeviceConfiguration object to find a video device. If no camera is available, videoCaptureDevice will be null:

videoCaptureDevice = CaptureDeviceConfiguration.GetDefaultVideoCaptureDevice();
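For context, the creation step that precedes this call is brief. The following is a paraphrase of the sample’s setup, not a verbatim excerpt:

```csharp
// Sketch: one-time setup at the top of InitializeVideoRecorder.
if (captureSource == null)
{
    // A single CaptureSource drives both preview and recording.
    captureSource = new CaptureSource();

    // The FileSink is created up front but left disconnected,
    // so the app starts in "preview" rather than "recording" mode.
    fileSink = new FileSink();

    videoCaptureDevice =
        CaptureDeviceConfiguration.GetDefaultVideoCaptureDevice();
}
```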

In Windows Phone 7.5, cameras are optional. Although they’re common on today’s devices, it’s a best practice to check for them in your code. As Figure 10 shows, videoCaptureDevice is used to check for the presence of a camera. If one is available, captureSource is set as the source of a VideoBrush named videoRecorderBrush, and videoRecorderBrush is used as the fill for the Rectangle control named viewfinderRectangle. When the Start method of captureSource is called, the camera begins sending video to the rectangle.

Figure 10 Displaying the Video Preview

// Initialize the camera if it exists on the device.
if (videoCaptureDevice != null)
{
  // Create the VideoBrush for the viewfinder.
  videoRecorderBrush = new VideoBrush();
  videoRecorderBrush.SetSource(captureSource);
  // Display the viewfinder image on the rectangle.
  viewfinderRectangle.Fill = videoRecorderBrush;
  // Start video capture and display it on the viewfinder.
  captureSource.Start();
  // Set the button state and the message.
  UpdateUI(ButtonState.Initialized, "Tap record to start recording...");
}
else
{
  // Disable buttons when the camera is not supported by the device.
  UpdateUI(ButtonState.CameraNotSupported, "A camera is not supported on this device.");
}

In this example, a helper method named UpdateUI manages button states and writes messages to the user. See the Video Recorder Sample for more details.

Although the fileSink object has been created, no video is being recorded at this point. This state of the application is referred to as video “preview.” To record video, fileSink must be connected to captureSource before captureSource is started. In other words, to switch from preview to recording, you need to stop captureSource, connect fileSink and then restart captureSource.

When the user taps the record button in the video recorder sample, the StartVideoRecorder method starts the transition from preview to recording. The first step in the transition is stopping captureSource and reconfiguring the fileSink:

// Connect fileSink to captureSource.
if (captureSource.VideoCaptureDevice != null
    && captureSource.State == CaptureState.Started)
{
  captureSource.Stop();
  // Connect the input and output of fileSink.
  fileSink.CaptureSource = captureSource;
  fileSink.IsolatedStorageFileName = isoVideoFileName;
}

Although the CaptureSource and VideoBrush classes might sound familiar if you’ve developed applications for the Silverlight plug-in, the FileSink class is all new. Exclusive to Windows Phone applications, the FileSink class knows all about writing to Isolated Storage; all you need to do is provide the name of the file.

After fileSink has been reconfigured, the StartVideoRecorder method restarts captureSource and updates the UI:

captureSource.Start();
// Set the button states and the message.
UpdateUI(ButtonState.Ready, "Ready to record.");

When the user stops recording, to transition from recording to preview, captureSource needs to be stopped again before the fileSink is reconfigured, as shown in Figure 11.

Figure 11 Transitioning from Recording to Preview

// Stop recording.
if (captureSource.VideoCaptureDevice != null
&& captureSource.State == CaptureState.Started)
{
  captureSource.Stop();
  // Disconnect fileSink.
  fileSink.CaptureSource = null;
  fileSink.IsolatedStorageFileName = null;
  // Set the button states and the message.
  UpdateUI(ButtonState.NoChange, "Preparing viewfinder...");
  StartVideoPreview();
}

The start-video-preview logic was isolated in another method to enable transition to preview from the video playback state (not covered in this article). Though I won’t cover playback here, it’s important to note that in Windows Phone, only one video stream can be running at a time.

The Video Recorder Sample features two separate video streams:

  1. captureSource → videoRecorderBrush → viewfinderRectangle (Rectangle control)
  2. isoVideoFile → VideoPlayer (MediaElement control)

Because only one stream can run at a time, this sample features a “dispose” method for each stream that can be called prior to the other stream running. In the DisposeVideoPlayer and DisposeVideoRecorder methods, the stream is stopped by calling the Stop method on the respective object (and setting the source of MediaElement to null). The CaptureSource and MediaElement objects don’t actually implement the IDisposable interface.
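For the recorder side, such a “dispose” method amounts to something like the following. The method name follows the sample, but the body is my paraphrase rather than the sample’s exact code:

```csharp
// Sketch: stop the recorder stream before the playback stream starts.
private void DisposeVideoRecorder()
{
    if (captureSource != null)
    {
        // Note: this is CaptureSource.Stop, not IDisposable.Dispose.
        if (captureSource.State == CaptureState.Started)
        {
            captureSource.Stop();
        }

        // Release the objects so InitializeVideoRecorder can rebuild them.
        captureSource = null;
        videoCaptureDevice = null;
        fileSink = null;
    }
}
```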

At this point, you might be thinking that the Camera Grayscale Sample seemed to have two videos going at the same time. In reality, there was only one video stream in that application: the stream from the PhotoCamera object to the VideoBrush control. The grayscale “video” was actually just a bitmap that was redrawn at a high rate of speed, based on individually manipulated frames from the camera preview buffer.

Wrapping Up

The camera API, new for Windows Phone 7.5, opens the door for a new breed of applications that solve problems and entertain in ways not possible with earlier versions of the OS. This article touched on only a few aspects of the API. For the complete reference, see the Camera and Photos section in the Windows Phone SDK documentation at msdn.microsoft.com/library/windows/apps/hh202973(v=vs.105).aspx.


Matt Stroshane writes developer documentation for the Windows Phone team. His other contributions to MSDN Library feature products such as SQL Server, SQL Azure and Visual Studio. When he’s not writing, you might find him out on the streets of Seattle, training for his next marathon. Follow him on Twitter at twitter.com/mattstroshane.

Thanks to the following technical experts for reviewing this article: Eric Bennett, Nikhil Deore, Adam Lydick and Jon Sheller