The North Face In-Store Explorer Proof-of-Concept: A White Paper


(revised for the February 2006 CTP)

Darren David, Fluid

Karsten Januszewski, Microsoft Corporation

March 2006
January 2007: Downloadable sample updated

Summary: See how Windows Presentation Foundation (formerly code-named "Avalon") was used to create an immersive experience that brings The North Face's brand and catalog to life in a retail environment. (28 printed pages)

Download the associated sample code, TNF_Samples.msi.


To successfully run the demo, you need to install the following:

  • Windows XP Service Pack 2 or Windows Vista February CTP (build 5308) on an x86 system
  • February 2006 WinFX Community Technology Preview
  • February 2006 WinFX SDK
  • Microsoft Visual Studio 2005 or Microsoft Visual C# Express 2005
  • Visual Studio 2005 Extensions for WinFX


Application Model
State Management
Image Montage
The Video Carousel


"Avalon enables the type of richly interactive brand experiences Fluid's online retail customers demand. As e-commerce evolves from simply performing transactions to offering compelling user experiences Avalon will be a critical component of that evolution."

-Tamir Scheinok, CEO, Fluid

Fluid, a pioneer in online retail customer experience, worked with The North Face, a premier outdoor products manufacturer offering the most technically advanced products on the market to accomplished climbers, mountaineers, extreme skiers, and explorers, to develop a proof-of-concept retail kiosk using the Windows Presentation Foundation (WPF) platform (formerly code-named "Avalon"). The goal of the project was to show how WPF could be used to create an immersive experience that brings The North Face's brand and catalog to life in a retail environment. You can watch a short video from the PDC 2005 keynotes to see a demonstration of the proof-of-concept.

The final result was dubbed The North Face In-Store Explorer. Built by Fluid, it highlights several of the capabilities offered by WPF:

  • Hardware-accelerated rendering: Animated 3D meshes with video materials composited on an animated image montage; sub-pixel ClearType text; 2D animation with vector-based shapes; and more. All of this is made possible by the underlying WPF engine, which is hardware accelerated and integrates these different media into a common experience.

  • Unified programming model for all media types: The proof-of-concept is a great example of how WPF provides a common programming model that integrates 2D vector shapes, 2D animation, images, 3D geometries, 3D animation, video and text into a single common platform.

  • 3D support: 3D is used in subtle but powerful ways to enhance the user experience, employing the third dimension to provide intuitive user interface metaphors.

  • Full access to the .NET Framework: The proof-of-concept employs more than WPF: it uses features of the .NET platform, such as the XML deserialization capabilities, showing that WPF has all of the power of .NET at its disposal.


This article walks through how the proof-of-concept was built, discussing the design decisions and performance-optimizing techniques used in creating it. First, the article covers how the architecture of the application model as a whole was designed, including a deep discussion of the custom state manager built for the application. Second, it discusses how the image montage was created. Last, it delves into the creation of the video carousel, including some 3D tricks that were essential to making the application perform well. Code samples associated with all three of these discussions can be downloaded.

Application Model

One of the first decisions to make when writing a WPF application is how to architect transitions between different parts of the application. WPF provides a built-in mechanism to handle these transitions, based on the very common metaphor of navigating between pages. For The North Face In-Store Explorer proof-of-concept, one option would have been to create a System.Windows.NavigationApplication and use the navigation infrastructure to navigate between each screen. However, The North Face In-Store Explorer proof-of-concept had a requirement for dynamic transitions between each screen, such as elements of the application scaling and fading between screens. A NavigationApplication cannot support this scenario, so an alternative methodology was used that takes advantage of WPF's ability to composite multiple layers.

As such, the application does not use any of the navigation features of WPF and consists of a single window. The question then becomes one of creating a mechanism to transition between the different screens of the application within this single window. Each screen of the application is a Canvas. The North Face In-Store Explorer proof-of-concept always remains full screen with no option for the user to resize it, so it was decided that a Canvas would be used instead of a Grid. Using a Canvas doesn't offer any reflow, but does allow items to be absolutely positioned. Transitioning between screens becomes a matter of manipulating canvases in the single window.

There are two options for how to instantiate and manipulate the different canvas elements. The first option is to instantiate the canvases on an "as-needed" basis, inserting them into the visual tree when they are called for, and removing canvases when they are no longer needed. The second option is to instantiate all the canvases at the start of the application and then show/hide/manipulate them as needed. The second approach was chosen, both to allow for the dynamic transitions between screens and to achieve performance goals. Taking the performance hit of loading all the canvases when the application starts mitigates any performance cost for adding/removing canvases from the visual tree on the fly.

Because all of the canvases were loaded into a single window, the XAML and code for this single window had the potential to become unwieldy. To circumvent this, each screen was created as a control. These controls weren't created as separate DLLs, as there was no need to reuse them outside this particular application. Each screen, and the components of each screen, were created as controls within the application itself. This same methodology is used in the sample project named "Architecture" that accompanies this article.

Wiring Up Controls In XAML

In the following XAML file, we will see how a set of controls is instantiated but not necessarily visible on the canvas. Look at Window1.xaml in the "Architecture" project:

<Window x:Class="Architecture.Window1"
  <Canvas Background="BurlyWood" Width="1024" Height="768" x:Name="MainCanvas">
    <ui:Screen3 x:Name="Screen3Canvas" Visibility="Collapsed"/>
    <ui:Screen2 x:Name="Screen2Canvas" Visibility="Collapsed" />
    <ui:Screen1 x:Name="Screen1Canvas" Visibility="Collapsed"/>
    <ui:Logo x:Name="LogoCanvas" Canvas.Left="{x:Static l:Constants.LOGOPANEL_POS_LEFT_OFFSCREEN}" Canvas.Top="300"/>

First, look at the attribute that declares xmlns:l="clr-namespace:Architecture". This mapping between a CLR namespace and an XML namespace makes it possible to reference and instantiate classes in XAML from within the application itself. An additional mapping is necessary to enable classes from the Architecture.UI namespace to be instantiated.

Consider the class Screen1. Within the project, there is a XAML file called Screen1.xaml with a root element of Canvas, and a code-behind file, Screen1.xaml.cs, containing a partial class that derives from Canvas. To instantiate this class, we declare <ui:Screen1 />. Note that an x:Name must be provided (as opposed to just using the Name attribute) if you later want to access this element from code, because the element exists in a different namespace than the default WPF namespace. The XAML looks like this:

    <ui:Screen1 x:Name="Screen1Canvas" Visibility="Collapsed"/>

Techniques for "Hiding" User Interface Controls

While many UIElements are instantiated in Window1.xaml, not all of them are visible immediately. Two techniques are used to "hide" the different screens of the application.

The first technique is to set the element's Visibility property to Collapsed, as seen in the Screen1 XAML above. Once hidden this way, the element can be made visible later. Because the element has already been instantiated and added to the visual tree, the performance cost of showing the control is nominal compared to either manually inserting it into the visual tree or navigating to it. It is worth pointing out that the Visibility property has three states: Visible, Hidden, and Collapsed. Visibility.Hidden means that the element occupies layout space but is invisible, whereas Visibility.Collapsed means the element takes no layout space at all. With Visibility.Collapsed, the system does not calculate layout for the element, so less of a performance hit is taken.
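The cost difference is easy to picture with a toy measure pass. The following sketch (Python pseudocode for the concept, not WPF's actual layout algorithm) shows why Collapsed is cheaper: it drops out of layout entirely, while Hidden still reserves its slot:

```python
# Hypothetical measure pass for a vertical stack of elements, illustrating
# the layout difference: Hidden still reserves its slot, Collapsed does not.
def measure_stack_height(elements):
    total = 0
    for height, visibility in elements:
        if visibility == "Collapsed":
            continue          # no layout is calculated for Collapsed elements
        total += height       # Visible and Hidden elements both occupy space
    return total

elements = [(100, "Visible"), (50, "Hidden"), (75, "Collapsed")]
print(measure_stack_height(elements))  # 150
```

The Hidden element contributes its 50 units of height even though it draws nothing; the Collapsed element contributes nothing at all.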

The second technique to hide elements in the main window is to place them off screen. This is quite handy for animating elements onto the screen, giving the effect of elements sliding in from the left, right, top or bottom.

So, in the above XAML, the <ui:Logo /> element has its Canvas.Left property set to "{x:Static l:Constants.LOGOPANEL_POS_LEFT_OFFSCREEN}". The value of this constant is -300, which places the logo far off the left edge of the screen. Later, the Canvas.Left property is animated, which gives the effect of the canvas sliding onto the screen from the left.

As a side note, it is worth mentioning the use of static constants in XAML. There is a class in the project file named Constants.cs that looks something like this:

public static class Constants
{
   public const double LOGOPANEL_POS_LEFT_OFFSCREEN = -300;
   public const double LOGOPANEL_POS_LEFT_ONSCREEN = 50;
   public const double LOGOPANEL_POS_RIGHT_OFFSCREEN = 800;
}

This class is referenced in the XAML using the syntax {x:Static l:Constants.LOGOPANEL_POS_LEFT_OFFSCREEN}. Of course, the constant can be referenced in code as well; this is an example of the parity between declarative markup and code. Note that the xmlns:l prefix is used here, which refers back to the namespace mapping declared in Window1.xaml.

The Z Index

Finally, the "z-index" also affects the visibility of UI elements in a single window. This index is unrelated to the z-coordinate in 3D space; rather the "z-index" is WPF's way to track the relative order of elements that might share the same spatial coordinates. The WPF layout engine allows child elements of a grid or canvas to share the same coordinates. If multiple elements share the same coordinates and the elements are opaque, elements will be obscured by other elements, based on their "z-index". The "z-index" of elements in WPF is determined by their order in XAML or in code and determines how elements are laid out front to back, with the elements defined first backmost. Thus, looking at the above XAML, the last element in the XAML, Logo, is then effectively "on top" of the other elements in the window.

If the "z-index" of an element needs to change at runtime, you can remove the element from its parent's children collection and re-insert it to reassert its position.
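Since the implicit "z-index" simply follows declaration order, bringing an element to the front amounts to moving it to the end of the children collection. A minimal sketch of the idea, with a plain Python list standing in for a Children collection:

```python
def bring_to_front(children, element):
    """Re-append an element so it is last in order, and therefore frontmost."""
    children.remove(element)
    children.append(element)

# Declaration order from back to front, as in the Window1.xaml example.
children = ["Screen3Canvas", "Screen2Canvas", "Screen1Canvas", "LogoCanvas"]
bring_to_front(children, "Screen2Canvas")
print(children)  # ['Screen3Canvas', 'Screen1Canvas', 'LogoCanvas', 'Screen2Canvas']
```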

State Management

One consequence of choosing a single-window approach (where all of the screens of the application reside in one window) is a greater need to keep track of the state of the application. State, in this case, means information such as which screen the user is currently viewing and which transitions are needed between different screens. Thus, the application needs to build in some type of state management infrastructure.

A state manager similar to the one used in The North Face In-Store Explorer proof-of-concept can be found in the Architecture code sample. The state manager for the application is encapsulated in a single class, of which the main window obtains a static instance. Within the StateManager class is an enumeration that represents all the different states of the application:

public enum ScreenStates
{
   AppIntro,
   // ... one value for each screen of the application
}

To manage these states, the class has a variable, CurrentState, which it uses to remember the current state. Additionally, the class has a Boolean variable, InTransition, to keep track of whether the state manager is in the process of transitioning. This is a semaphore used to stop the application from trying to switch states while it is in the middle of switching states, which would cause a mess of transitions to occur simultaneously.
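The guard logic is small enough to sketch outside of WPF. The following Python sketch (simplified, hypothetical member names; the real sample records the state in HandleStateChangeComplete) captures the pattern: reject state changes while a transition is in flight, and re-open the gate when the transition completes:

```python
class StateManager:
    """Sketch of the transition-guard pattern used by the state manager."""

    def __init__(self):
        self.current_state = "AppIntro"
        self.in_transition = False   # the semaphore

    def set_state(self, state):
        if self.in_transition:
            return False             # reject: a transition is already running
        self.in_transition = True    # close the gate while transitioning
        self.current_state = state
        return True

    def handle_state_change_complete(self):
        self.in_transition = False   # re-open the gate

sm = StateManager()
assert sm.set_state("Screen1Intro")        # accepted
assert not sm.set_state("Screen2Intro")    # rejected while transitioning
sm.handle_state_change_complete()
assert sm.set_state("Screen2Intro")        # accepted again
```

Without the guard, a second click arriving mid-transition would start a second set of animations on screens that are still moving.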

The state manager also has member variables representing the main window as well as all of the different screens of the application, represented by controls, as discussed above. In the Loaded event of the main window, there is code to wire up each "live" control to the static state manager, so that the state manager can manipulate the screens and create the transitions between screens. Below is a portion of the code in the Loaded event:

private StateManager _StateManager;

private void WindowLoaded( object sender, EventArgs e )
{
   // . . .
   _StateManager = StateManager.GetInstance();
   _StateManager.MainWindow = this;
   _StateManager.Screen1 = Screen1Canvas;
   _StateManager.Screen2 = Screen2Canvas;
   // . . .
}


A Sample State Transition

Let's walk through an example of a state transition. If you compile and launch the Architecture executable, you can initiate state changes by clicking the left mouse button. At any time, you can click the right mouse button to reset the state back to the start of the application.

So, to initiate the transition, the application listens for a mouse click on the main window. Below is a portion of the OnLeftMouseClicked code as well as the SetState code:

private void OnLeftMouseClicked(object o, EventArgs e)
{
   switch (_StateManager.CurrentState)
   {
      case (int)StateManager.ScreenStates.AppIntro:
         // . . .
   }
}

public void SetState(ScreenStates state)
{
   if (_InTransition) return;

   switch (state)
   {
      case ScreenStates.AppIntro:
         // . . .
   }
}

The state manager is invoked by calling the SetState method. Consider what happens on the first click, which calls the AppIntro method. Let's look at that method:

private void AppIntro()
{
   _InTransition = true;
   Screen3.Visibility = Visibility.Collapsed;
   // . . .
}
First off, notice that the _InTransition flag is set to true, which makes sure that no other transitions are attempted while this one is occurring. Then, Screen3 is collapsed; this is a bit of cleanup in case the state has been reset. Next, an animation is initiated on the logo itself. Remember, from within the state manager we can manipulate the different controls because we have instances of each control. In this case, the AnimateIn method on the Logo creates the animation that zooms the logo in from off screen. Lastly, the HandleStateChangeComplete method within the state manager signals that the transition is complete.

A More Interesting State Change

The next state change is more interesting, as it involves both an animation and a visibility change. This state change causes the logo to animate off screen and then makes visible the Screen1 canvas that is currently collapsed. Because the desired effect is for the logo to complete its animation before the Screen1 canvas is made visible, the animation needs to set up a callback method for when the animation completes.

private void Screen1Intro()
{
   _InTransition = true;
   _Logo.AnimateOut(new EventHandler(OnLogoOffScreen));
}

private void OnLogoOffScreen(object sender, EventArgs e)
{
   Clock clock = sender as Clock;
   if ( clock == null ) return;
   if (clock.CurrentState != ClockState.Active)
   {
      _Screen1.Visibility = Visibility.Visible;
      // . . .
   }
}

Look at the code in the AnimateOut method of the Logo class:

public void AnimateOut(EventHandler callback)
{
   DoubleAnimation da = new DoubleAnimation(Canvas.GetLeft(this),
      Constants.LOGOPANEL_POS_RIGHT_OFFSCREEN, TimeSpan.FromSeconds(2));
   da.BeginTime = null;
   AnimationClock ac = da.CreateClock();
   ac.CurrentStateInvalidated += new EventHandler(callback);
   this.ApplyAnimationClock(Canvas.LeftProperty, ac);
}

This method takes an EventHandler callback, which here is the OnLogoOffScreen method of the state manager. Before the animation is started, the CurrentStateInvalidated event of the animation clock is wired up in code so that once the animation completes, the callback occurs, setting the Visibility of the Screen1 canvas to Visible.

This paper is not going to drill into all of the other specific transition code, but the methodology for all the transitions is consistent: the main window invokes the state manager SetState method, passing the transition it wants to invoke. Then, the state manager is able to invoke methods on the different controls to initialize them and/or animate them.

It is worth looking at the HandleStateChangeComplete method, which is called after each transition is complete.

private void HandleStateChangeComplete( ScreenStates state )
{
   _InTransition = false;
   _CurrentState = ( int ) state;
}


First, this code sets the _InTransition flag back to false, so that new transitions can occur. Then, it sets the _CurrentState variable so that the application has knowledge of its current state.

The actual transitions in the code sample are relatively simple, but if you look at The North Face In-Store Explorer proof-of-concept, you will see some quite exciting state transitions that use this basic infrastructure to set up the transitions.

Image Montage

When the application first launches, a series of images is displayed in the background. Each image pans across the screen and fades out into the second image. This has been dubbed the "Ken Burns" effect, referring to Ken Burns, the documentary filmmaker who pioneered the use of moving and zooming on still images to make documentaries come alive. For The North Face, the effect adds a subtle and constant movement that makes the application "feel alive".

That effect is illustrated in another code sample accompanying this article, called ImageMontage, which shows the basics of how the image montage was built. The image montage is built using a single class, ImageMontageCanvas, deriving from System.Windows.Controls.Canvas, which contains the functionality to load the images, cycle through them, and animate them. The images themselves are held in a System.Collections.ObjectModel.ObservableCollection. The ObservableCollection class is a specialized collection class provided by WPF that has been optimized for use with lists in WPF, especially in data binding scenarios: it raises change notifications that data binding can pick up. See the section on data binding in the "Optimizing Performance in Windows Presentation Foundation" white paper for more on the usage of ObservableCollection.

Populating the Image Montage with Images

To populate the ObservableCollection of images, a method called LoadImages is invoked from the constructor of the ImageMontageCanvas class.

In the LoadImages method, note the code for extracting an image from a file and assigning it to the WPF System.Windows.Controls.Image class. By passing a URI for the image file to the constructor of a System.Windows.Media.Imaging.BitmapImage and assigning the result to the Image's Source property, an image from the file system can be displayed.

public void LoadImages()
{
   DirectoryInfo dir = new DirectoryInfo(@"images");
   foreach (FileInfo f in dir.GetFiles("*.jpg"))
   {
      Image newImage = new Image();
      newImage.Source = new BitmapImage(new Uri(f.FullName, UriKind.Absolute));
      Images.Add(newImage);
   }
}

Once the images have been loaded, the image montage can begin. Let's take a look at a portion of that code. The ImageMontage doesn't have any XAML but is a WPF class created purely in code, deriving from Canvas. The rationale for using a canvas here is that no layout or resizing needs to occur: the only thing on the canvas is an image of fixed size.

Displaying and Fading the Images

While we will not go into every implementation detail of the ImageMontageCanvas class, we will highlight its key methods. The Init() method is a public method that gets called by the main window to begin the montage.

public void Init()
{
   DisplayImage(this.Images[_CurrentImageIndex]);
   DoFade(this.Images[_CurrentImageIndex], 0, 1);
   // . . .
   Start();
}


The first thing that happens in Init() is that DisplayImage gets called, passing the first image in the collection. The DisplayImage() method sets the opacity of the image, its position on the canvas, and then adds the image to the visual tree.

protected void DisplayImage( Image img )
{
   img.Opacity = 0;
   Canvas.SetTop( img, 0 );
   Canvas.SetLeft( img, 0 );
   this.Children.Add( img );
}

After DisplayImage is called, the Init() method calls DoFade on the image, which starts an animation to fade in the opacity of the first image.

protected void DoFade(Image img, double startOpacity, double endOpacity)
{
   DoubleAnimation anim = new DoubleAnimation(startOpacity, endOpacity, TimeSpan.FromSeconds(3));
   AnimationClock ac = anim.CreateClock();
   ac.CurrentStateInvalidated +=
      delegate(object sender, EventArgs e)
      {
         Clock clock = sender as Clock;
         if (clock == null) return;
         if (clock.CurrentState == ClockState.Filling)
         {
            if (img.Opacity > .1)
            {
               // Fade-in complete: notify the application.
            }
            else
            {
               // Faded out completely: remove the image from the canvas.
               this.Children.Remove(img);
            }
         }
      };
   img.ApplyAnimationClock(Image.OpacityProperty, ac);
}

Let's walk through this animation code. Because the opacity of the image is being animated and opacity is of type double, a strongly typed animation, DoubleAnimation, is created. After properties such as From, To, and Duration are set on the animation, an AnimationClock is created from it. Then, the CurrentStateInvalidated event is wired up; we need to do this in order to remove the image from the ImageMontage canvas once it has faded to zero. Lastly, the clock is applied to the image itself, which begins the animation.

Using C# Anonymous Methods

The most interesting part of the code is where a method is wired up to the CurrentStateInvalidated event of the AnimationClock. We use one of the new C# 2.0 features here, anonymous methods, to gain access to the image that we want to remove. If we simply assigned a named method to the CurrentStateInvalidated event, the application would receive as the sender object an instance of the AnimationClock associated with that animation, but it would not have access to the image being animated. As such, we would be able to manipulate the clock, but we would not have direct access to the image itself. For the purposes of this application, we need a reference to the image being animated because we want to remove it from the canvas once its opacity has reached 0.

The technique used to pass the image to the event handler is the C# anonymous method feature (for more information, see Create Elegant Code with Anonymous Methods, Iterators, and Partial Classes by Juval Löwy). This allows us to get at the local variables (both the image and the canvas itself) when we wire up the event handler.

Depending on what has happened to the opacity, we either fire another event, letting the application know the fade is done, or, if the image is no longer visible (its Opacity is less than .1), we remove the image from the tree by calling Children.Remove on the ImageMontage itself.

This use of anonymous methods is an effective way of getting at the element being animated from within the animation clock's event handler, and it has potential applicability in a number of scenarios.
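The capture trick itself is language-neutral. In the Python sketch below, a closure plays the role of the C# anonymous method: the completion handler can reach both the children collection and the specific element being animated, not just the clock that raised the event (all names here are hypothetical):

```python
def make_fade_complete_handler(children, img):
    # The closure captures both the canvas's children and the image being
    # animated, just as the C# anonymous method captures its enclosing locals.
    def on_current_state_invalidated(opacity):
        if opacity < 0.1:
            children.remove(img)   # faded out: take the image off the canvas
    return on_current_state_invalidated

children = ["image0", "image1"]
handler = make_fade_complete_handler(children, "image0")
handler(1.0)     # still visible: nothing removed
handler(0.0)     # fully faded: removed from the children collection
print(children)  # ['image1']
```

A plain named handler would only receive the event arguments (the clock); the closure is what carries the extra context along.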

Panning and Scaling the Images

To achieve the effect of panning across the image and scaling it, the image's width is animated at the same time as its Canvas.Top and Canvas.Left positions:

protected void PanAndScale(Image img)
{
   double startW = 800;
   double endW = 1500;

   double startX = 0;
   double endX = -40;

   double startY = 0;
   double endY = -50;

   DoubleAnimation anim = new DoubleAnimation(startW, endW, TimeSpan.FromSeconds(13));
   img.BeginAnimation(Image.WidthProperty, anim);

   // Animate position to keep the image centered
   DoubleAnimation anim1 = new DoubleAnimation(startY, endY, TimeSpan.FromSeconds(13));
   img.BeginAnimation(Canvas.TopProperty, anim1);

   DoubleAnimation anim2 = new DoubleAnimation(startX, endX, TimeSpan.FromSeconds(13));
   img.BeginAnimation(Canvas.LeftProperty, anim2);
}

The values for the image's width along with the distance to move the X and Y of the canvas can be experimented with to change the look and feel of the panning and zooming.

Changing From Image to Image

The final call in the Init() method is to Start(). This initiates a DispatcherTimer that fires every 10 seconds and is what causes the images to loop continuously.

public void Start()
{
   if (_ImageChangeTimer == null)
   {
      _ImageChangeTimer = new DispatcherTimer(
             TimeSpan.FromSeconds(10),              // Time to wait
             DispatcherPriority.Background,         // Priority
             new EventHandler(OnImageChangeTimer),  // Handler
             this.Dispatcher);                      // Current dispatcher
   }
}


The final method to discuss is OnImageChangeTimer, the method that gets invoked every ten seconds. It is functionally similar to the Init() method: it calls DoFade to fade out the current image, then calls DoFade to fade in the new image, and finally increments _CurrentImageIndex by 1.
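One bookkeeping detail worth making explicit: for the montage to loop forever, the index increment presumably wraps back to zero after the last image (otherwise it would run off the end of the collection). A sketch of that arithmetic:

```python
def next_image_index(current, count):
    # Modulo wraps the montage back to the first image after the last one.
    return (current + 1) % count

index, order = 0, []
for _ in range(5):              # five ten-second timer ticks, three images
    order.append(index)
    index = next_image_index(index, 3)
print(order)  # [0, 1, 2, 0, 1]
```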

Voilà! There you have the nuts and bolts of the ImageMontage class.


The Video Carousel

In The North Face In-Store Explorer proof-of-concept, after the initial branding text zooms out towards the user, a carousel of panels emerges in 3D space, continually rotating, with videos mapped to each panel. The idea is that users can select any of the videos they would like to watch. The carousel turns out to be a compelling user interface metaphor that takes advantage of the 3D engine in WPF.

There are many different ways to implement a carousel in WPF. For The North Face In-Store Explorer proof-of-concept, the carousel was built using a control called ListBox3D, which derives from System.Windows.Controls.ListBox. At first, that might sound strange; after all, a list box is traditionally rendered as a flat, scrolling list of selectable text items.


But with WPF, a ListBox is simply a user interface metaphor for a collection of selectable items. How that control is styled is entirely up to the application developer and designer. One might refer to this as the notion of Platonic controls, after Plato's philosophy: there is the abstract idea of the ListBox, and then an infinite number of instances of a ListBox. In this case, the ListBox is a 3D control.

By deriving from ListBox, the ListBox3D control gets the standard features of a ListBox, such as adding items, removing items, and selecting items. However, the look of the ListBox3D has nothing in common with its 2D base class. The ListBox3D is styled using a Viewport3D, so each item within the ListBox3D is a 3D geometry. A video is then used as a material on each item's geometry. When any one of the ListBox3D items is clicked, the selection event is raised, as one would expect with a generic ListBox.

To illustrate how this works, there is a sample included with this article called VideoCarousel. It implements the basic functionality seen in The North Face In-Store Explorer proof-of-concept.

The core of the carousel is composed of two classes. The first, ListBox3D, derives from System.Windows.Controls.ListBox. The second, ListItem3D, derives from DispatcherObject and represents the individual items in the list.

public class ListItem3D : DispatcherObject
public class ListBox3D : ListBox

Styling and Instantiating the ListBox3D

Before digging into the guts of these classes, let's look at how the ListBox3D is styled, as discussed above. The styling is done in XAML as a resource. By providing a style that sets the Template property, we can take over the visual tree of the control; it is the control template that contains the visual tree of that control. A control template is intended to be a self-contained unit of implementation detail that is invisible to outside users. In this case, we create a single Viewport3D.

    <Style x:Key="Carousel3DStyle" TargetType="{x:Type l:ListBox3D}">
      <Setter Property="Template">
        <Setter.Value>
          <ControlTemplate TargetType="{x:Type l:ListBox3D}">
            <Viewport3D Focusable="true" ClipToBounds="true">
              <Viewport3D.Camera>
                <PerspectiveCamera FieldOfView="45" />
              </Viewport3D.Camera>
            </Viewport3D>
          </ControlTemplate>
        </Setter.Value>
      </Setter>
    </Style>


The remainder of the XAML for VideoCarousel is pretty simple. We instantiate a ListBox3D as well as a transparent grid that covers the entire canvas:

<Canvas Background="Black" Width="1024" Height="768" x:Name="MainCanvas" Loaded="WindowLoaded">
    <l:ListBox3D x:Name="Carousel"
    Canvas.Top="0" Canvas.Left="0" 
    Width="1024" Height="768"
    Style="{DynamicResource Carousel3DStyle}"  
    <Grid x:Name="CarouselMouseEventInterceptor" 
    Canvas.Top="0" Canvas.Left="0" 
    Width="1024" Height="768" 

Notice how the custom listbox uses the style that we declared in the resources section of the window. Also notice how the listbox wires up the ItemSelected event. However, the application has to do some work to raise that event; we don't get hit testing for free with the custom listbox. In fact, this is where the grid layered over the listbox comes into play. Because it follows the listbox in the XAML declaration, the grid is "in front" of the listbox as far as "z-index" is concerned. The purpose of this grid is to intercept mouse clicks and pass them on to the 3D hit-testing engine. Because a click in the viewport area may not hit one of the meshes, we only want to raise the ItemSelected event if the user actually clicked on one of the meshes. We will explore how this works in a moment.

Let's now look at how the listbox gets built and activated. A series of things occur in order to get our carousel up and running:

  1. ListBox3D Initialization. The custom listbox gets instantiated and its constructor is called. The constructor fires off code that adds a top-level model group to the viewport as well as a camera and lights. It creates transforms on the entire model group of the viewport.
  2. ListItem3D Initialization. ListItems are added to the listbox. In the constructor of the listitem, a mesh will be fetched that will be associated with that listitem. In addition, a video brush for each mesh will be created.
  3. ListItem3D Layout. The carousel will be built by positioning each mesh in the viewport equidistant from each other.
  4. ListBox3D Animation. The carousel animation will begin.
  5. ListItem3D Selection. How to capture the ListItem3D selection event.
  6. Addendum on Mesh Creation. A discussion of creating a mesh with disconnected triangles to allow a material to be used on both sides of the mesh.
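Of these steps, the "equidistant" layout in step 3 is pure geometry: panel i of n sits at angle 2πi/n around a circle of some radius. The following Python sketch (illustrative only; the sample's actual radius and axis conventions may differ) computes such positions in the viewport's x-z plane:

```python
import math

def carousel_positions(n, radius):
    """Place n panels evenly on a circle of the given radius (x-z plane)."""
    positions = []
    for i in range(n):
        angle = 2 * math.pi * i / n       # equidistant angular spacing
        x = radius * math.sin(angle)
        z = radius * math.cos(angle)
        positions.append((round(x, 3), round(z, 3)))
    return positions

print(carousel_positions(4, 2.0))  # four panels at 90-degree intervals
```

Rotating the whole carousel then only requires animating a single rotation transform on the parent model group, rather than moving each panel individually.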

Let's drill into each of these steps.

ListBox3D Initialization

In the constructor of the ListBox3D, two events are wired up, Loaded and Initialized:

public ListBox3D()
{
   this.Initialized += new EventHandler(OnInitialized);
   this.Loaded += new RoutedEventHandler(OnLoaded);
}

The reason for the two different events is that Initialized is raised before properties have been set on the ListBox3D, while Loaded is raised after the properties are set. In the Initialized handler, we create a top-level model group called _MainGroup. The standard set of transforms (scale, rotation, translation) is added to this top-level group; these transforms will allow us to later rotate the entire Model3DGroup. Then, a single white ambient light is added as a child to the _MainGroup. The _MainGroup itself will not contain any GeometryModel3Ds; it will contain a child Model3DGroup, called _ModelItems, to which we will later add the meshes for the set of ListItem3D objects containing the actual 3D geometry. Note that at this point none of these Model3DGroups has been added to the viewport; that happens in the OnLoaded event of the viewport.

When the OnLoaded event is fired, we first need to manipulate the viewport so that we can add the ModelGroups as well as manipulate some other properties on the viewport. Getting at the viewport itself turns out to be trickier than one might expect. Because the viewport is nested inside the control template of the style, we don't have direct access to it from the ListBox3D itself. We actually have to find it by walking the visual tree of the ListBox3D, as demonstrated below in the FindViewport3D method:

        private FrameworkElement FindViewport3D(Visual parent)
        {
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(parent); i++)
            {
                Visual visual = (Visual)VisualTreeHelper.GetChild(parent, i);

                if ((visual is FrameworkElement) && (visual is Viewport3D))
                {
                    return (visual as FrameworkElement);
                }
                else
                {
                    FrameworkElement result = FindViewport3D(visual);
                    if (result != null)
                    {
                        return result;
                    }
                }
            }
            return null;
        }

This is a handy method with potential for reuse in a number of scenarios: whenever a situation arises in which you need to extract a child element from a control that has been styled. Once we have a reference to the viewport, we can add the _MainGroup Model3DGroup.
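For example, the Loaded handler might use FindViewport3D along these lines. This is a sketch, not the sample's exact code: the member names _MainViewport3D and _MainGroup are assumptions, and the object model shown (hosting a Model3DGroup in the viewport via a ModelVisual3D) is that of released WPF, which may differ in detail from the February CTP:

```csharp
// Hypothetical sketch of the Loaded handler: locate the styled viewport,
// then make the group built in OnInitialized visible inside it.
private void OnLoaded(object sender, RoutedEventArgs e)
{
    _MainViewport3D = FindViewport3D(this) as Viewport3D;
    if (_MainViewport3D != null)
    {
        ModelVisual3D visual = new ModelVisual3D();
        visual.Content = _MainGroup;
        _MainViewport3D.Children.Add(visual);
    }
}
```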

All of this could have been done in XAML, but it was done in code; the two approaches are functionally equivalent. Some may find the ability to see the hierarchical shape of the visual tree in XML more intuitive; others may find instantiating the objects and manually adding them to collections more comfortable.

At this point, we now have a fully functional viewport, ready to go with lights, camera and a default set of transforms that can later be used to manipulate all of the models within it. Now we need some ListItem3Ds!

ListItem3D Initialization

We are now ready to add some ListItem3D objects to the ListBox3D. We do so by calling our own Add method on our custom ListBox3D, which in turn creates a new ListItem3D, sets its VideoSrc property, and calls the protected AddChild method of the base ListBox class:

public void Add(string VideoSrc)
{
   ListBox3DItem expListItem = new ListBox3DItem();
   expListItem.VideoSrc = VideoSrc;
   this.AddChild(expListItem);
}

Here we see one of the reasons for deriving from ListBox: the management of the collection of ListItem3D objects is entirely handled by functionality inherent to the ListBox base class.

When the ListItem3D gets created, its constructor is called and we fetch the actual geometry for the ListItem3D itself, represented by the _ItemGroup Model3DGroup member variable. The _ItemGroup member variable is populated by the GetMainGroup() method, which instantiates a class that will generate a mesh for each item. Later in the paper, we will discuss how we created the 3D geometry. For now it is sufficient to understand that when each ListItem3D is instantiated, its constructor builds a mesh and associates it with the _ItemGroup property of ListItem3D.

At this point we have a set of ListItem3D objects in our ListBox3D. However, if you ran the code at this point, you wouldn't see anything, because we haven't done any work to actually add the meshes to the viewport; we have only added the ListItem3D objects to the ListBox3D. It is incumbent on us to actually add the meshes to the viewport, positioning them with a default set of transforms, including scale, translation, and rotation.

ListItem Layout

When the Build method on the ListBox3D class is called from the Window1.cs code, the geometries are actually added to the viewport. Additionally, the video materials are applied to their associated geometries.

Laying out the geometries in the viewport turns out to be slightly tricky mathematically, because the effect desired is for each ListItem3D to be positioned equidistant from each other, as well as from the center of the viewport (0,0,0). Additionally, each geometry needs to be rotated such that its front face will be toward the viewer when the model is rotated. The design for the ListBox3D layout calls for it to be dynamic, so that no matter how many items are added, the ListItem3D objects would be positioned so that they orbit correctly.

Inspecting the code in the Build method will reveal how this is accomplished. Basically, it is achieved by first dividing 360 by the total number of ListItem3D items, which gives the angle offset between items. So, if there are four items in the carousel, the offset is 90 degrees. With this offset angle in mind, the collection of items is run through a loop, setting each rotation angle based on this offset and incrementing the total angle.
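The offset computation described above can be sketched as a small helper. The method name is hypothetical; the arithmetic matches the description:

```csharp
// Compute the base carousel angle for each item: the items are spaced
// 360 / itemCount degrees apart around the circle.
static double[] GetCarouselAngles(int itemCount)
{
    double offset = 360.0 / itemCount;   // e.g. 4 items -> 90 degrees apart
    double[] angles = new double[itemCount];
    for (int i = 0; i < itemCount; i++)
    {
        angles[i] = i * offset;          // 0, 90, 180, 270 for 4 items
    }
    return angles;
}
```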

The translation vector for each item is also generated based on incrementing the offset angle, using the algorithm in the following method:

private Vector3D GetTranslationOffsetForCarouselAngle(double angle)
{
   double radian = Math.PI * angle / 180.0;
   double x = _Radius * Math.Cos(radian);
   double z = _Radius * Math.Sin(radian);
   return new Vector3D(x, 0, z);
}

Once these transforms have been established, we can add them to the ListItem3D Model3DGroup by calling its SetDefaultPosition method and passing the scale, translate and rotate vectors. Then, we actually add the Model3DGroup to the viewport's _ModelItems collection.

The final step in the build code is to call each ListItem3D object's Initialize method, which will paint the mesh with the video as follows:

if (this.VideoSrc != "")
{
    if (_FrontVideoDrawing.Clock == null)
    {
        // Because our MediaElement is instantiated in code, we need to set
        // its loaded and unloaded behavior to be manual.
        _FrontVideoDrawing.LoadedBehavior = MediaState.Manual;
        _FrontVideoDrawing.UnloadedBehavior = MediaState.Manual;
        MediaTimeline mt = new MediaTimeline(new Uri(@"media\" + (String)this.VideoSrc, UriKind.Relative));
        //mt.RepeatBehavior = RepeatBehavior.Forever;
        // There are issues with RepeatBehavior in the Feb CTP, so instead we
        // wire up the CurrentStateInvalidated event to get repeat behavior.
        mt.CurrentStateInvalidated += new EventHandler(mt_CurrentStateInvalidated);
        MediaClock mc = mt.CreateClock();
        _FrontVideoDrawing.Clock = mc;
        _FrontVideoDrawing.Width = 5;
        _FrontVideoDrawing.Height = 10;
        VisualBrush db = new VisualBrush(_FrontVideoDrawing);
        Brush br = db as Brush;
        MaterialGroup mg = new MaterialGroup();
        mg.Children.Add(new DiffuseMaterial(br));
        GeometryModel3D gm3dFront = (GeometryModel3D)_ItemGroup.Children[0];
        // Only need to paint it one place to show up two places!
        gm3dFront.Material = mg;
    }
}


To add video to a WPF application, we create a MediaTimeline, passing it the location of the video. From the timeline, we create a media clock, which we can use to control the timeline. Ordinarily we would set the repeat behavior on the timeline to loop infinitely, so that each video restarts automatically when it reaches its end; because of issues with RepeatBehavior in the February CTP, however, we instead wire up the timeline's CurrentStateInvalidated event and restart the clock ourselves. We associate the clock with a MediaElement and then paint the video onto the mesh by creating a VisualBrush that uses the MediaElement, and then using that brush in a diffuse material. Notice that we haven't started playing the video yet; we have simply primed everything to go. The videos will be started later when code inside the ListBox3D calls PlayVideo().
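The CurrentStateInvalidated handler that emulates RepeatBehavior.Forever is not shown in the snippet above; a hypothetical sketch of what such a handler needs to do is:

```csharp
// Hypothetical sketch: when the media clock fills (the video has ended),
// seek back to the beginning so the video loops, emulating
// RepeatBehavior.Forever around the February CTP issue.
void mt_CurrentStateInvalidated(object sender, EventArgs e)
{
    Clock clock = sender as Clock;
    if (clock != null && clock.CurrentState == ClockState.Filling)
    {
        // Controller is available on root clocks; Begin restarts playback.
        clock.Controller.Begin();
    }
}
```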

Because we only painted one side of the mesh, how is it that the video shows up on both sides? This is a performance-optimizing trick we have accomplished with our generated mesh, which we will explain later.

ListBox Animation

Now that our meshes are correctly positioned and the video is painted on them, we can actually begin rotating the models and start playing the videos, which happens in the Activate method of the ListBox3D. This method starts each video playing and then calls the StartAutoRotation method, which does two things:

First, it initializes the code to adjust the volume. If all the videos played sound concurrently, a cacophony of garbled audio would result. As such, we need application logic to manipulate the volume for each video depending on where it is in the carousel. A DispatcherTimer is set up, correlated to the amount of time each video will be in front of the user:

_VolumeAdjustTimer = new DispatcherTimer(
    TimeSpan.FromMilliseconds(ADJUST_VOLUME_INTERVAL), // Time to wait
    DispatcherPriority.Background,                     // Priority
    new EventHandler(this.OnVolumeAdjustTimer),        // Handler
    this.Dispatcher);                                  // Current dispatcher

In the timer's callback, OnVolumeAdjustTimer, the volume is set for each video depending on whether it is in front of the user. Because the entire carousel is dynamic, an algorithm is used to determine which video is currently in front, and the volume is set accordingly; this logic can be explored in the OnVolumeAdjustTimer method.
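The sample's actual algorithm lives in OnVolumeAdjustTimer; a hypothetical illustration of the kind of logic it needs is shown below. The method name and the convention that the front of the carousel corresponds to an effective angle of 0 are both assumptions:

```csharp
// Given the carousel's current rotation and each item's base angle,
// find the item closest to the front of the carousel. The caller would
// give that item an audible volume and mute the rest.
static int FindFrontItem(double carouselAngle, double[] itemAngles)
{
    int front = 0;
    double best = double.MaxValue;
    for (int i = 0; i < itemAngles.Length; i++)
    {
        // Normalize the item's effective angle into [0, 360).
        double effective = (itemAngles[i] + carouselAngle) % 360.0;
        if (effective < 0) effective += 360.0;

        // Angular distance from the front position, wrapping the circle.
        double distance = Math.Min(effective, 360.0 - effective);
        if (distance < best) { best = distance; front = i; }
    }
    return front;
}
```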

Second, the StartAutoRotation method initiates the animation of the carousel. The carousel has two main types of animation. One is when a video is showcased, which is a slower animation in front of the viewer. The other animation is when the carousel is winding, which is a faster animation. The application decides which animation to perform based on a RotationState enumeration:

if (_AutoRotationState != (int)ExpeditionCarousel3DRotationStates.ActivationRotate)

Each of these methods is similar, in that they calculate the angle of rotation, call the RotateModel method, and then set a new RotationState. Perhaps more interesting to investigate is the RotateModel method, which is where the actual animation is started:

private void RotateModel(double start, double end, int duration)
{
   RotateTransform3D rt3D = _GroupRotateTransformY.GetCurrentValue();
   Rotation3D r3d = rt3D.Rotation;
   DoubleAnimation anim = new DoubleAnimation();
   anim.From = start;
   anim.To = end;
   anim.BeginTime = null;
   anim.AccelerationRatio = 0.1;
   anim.DecelerationRatio = 0.6;
   anim.Duration = new TimeSpan(0, 0, 0, 0, duration);
   AnimationClock ac = anim.CreateClock();
   ac.CurrentStateInvalidated += new EventHandler(OnRotateEnded);
   r3d.ApplyAnimationClock(Rotation3D.AngleProperty, ac);
}

public void OnRotateEnded(object sender, EventArgs args)
{
   if (sender == null)
      return;
   Clock clock = sender as Clock;
   if (clock == null)
      return;
   if (clock.CurrentState == ClockState.Filling)
   {
      if (this.IsAutoRotating)
      {
         if (this.AutoRotationState == (int)ExpeditionCarousel3DRotationStates.ShowcaseRotate)
         {
            // ... start the next rotation (elided) ...
         }
      }
      clock.CurrentStateInvalidated -= new EventHandler(this.OnRotateEnded);
      clock = null;
   }
}

The rotation of the carousel is achieved here by creating a DoubleAnimation that is applied to the Angle property of the Rotation3D object belonging to the Y-axis RotateTransform3D. The other interesting thing to note is the mechanism used to keep the model rotating: the CurrentStateInvalidated event is wired up when the animation is created, so that when the animation ends an event fires, allowing us to start a new animation. This pattern continues for as long as the application runs.

ListItem Selection

In The North Face In-Store Explorer proof-of-concept, different animations are initiated when the ListItem3D objects are selected. While that code is not part of the Video Carousel code sample, the code sample is ready to handle the ItemSelected event. If you run the code in a debugger and set a breakpoint on the OnList3DItemSelected method, you will see this in action.

It is incumbent on the application itself to raise this event, and it does so by responding to a mouse click on the transparent grid overlaying the viewport. The grid wires up the PreviewMouseLeftButtonUp event, which passes the event on to the ListBox3D's OnPreviewLeftClick method, which contains the following code:

public void OnPreviewLeftClick(object sender, MouseButtonEventArgs e)
{
   Point p = e.GetPosition(this);

   DoHitTest(_MainViewport3D, p);
   // Here we get back the selected listitem and can do with it what we will.
   if (_ciHitTest != null)
   {
      ListBox3DItem[] ListBox3DItemList = { _ciHitTest };
      ListBox3DItem[] ListBox3DItemListRemoved = { };
      // We also raise the OnSelectionChanged event for anyone listening.
      OnSelectionChanged(new SelectionChangedEventArgs(ListBox3D.SelectionChangedEvent,
         ListBox3DItemListRemoved, ListBox3DItemList));
   }
}

As is evident, the mouse position is extracted from the event arguments and passed to a hit testing method. Ultimately, if a hit is detected, the protected OnSelectionChanged event is raised from the ListBox base class, which can be handled in the application code.
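The DoHitTest method itself is not reproduced in this paper; a minimal sketch of 3D hit testing against a Viewport3D, using the hit-testing API of released WPF, might look like the following. The member _ciHitTest follows the paper's naming, while the FindItemForModel lookup helper is hypothetical:

```csharp
// Sketch: hit test the viewport at point p; record the first mesh hit.
private void DoHitTest(Viewport3D viewport, Point p)
{
    _ciHitTest = null;
    VisualTreeHelper.HitTest(viewport, null,
        new HitTestResultCallback(OnHitTestResult),
        new PointHitTestParameters(p));
}

private HitTestResultBehavior OnHitTestResult(HitTestResult result)
{
    // 3D hits come back as RayMeshGeometry3DHitTestResult objects.
    RayMeshGeometry3DHitTestResult meshResult =
        result as RayMeshGeometry3DHitTestResult;
    if (meshResult != null)
    {
        // Map the hit model back to its owning ListBox3DItem
        // (FindItemForModel is a hypothetical lookup helper).
        _ciHitTest = FindItemForModel(meshResult.ModelHit);
        return HitTestResultBehavior.Stop;   // first hit wins
    }
    return HitTestResultBehavior.Continue;
}
```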

Addendum on Mesh Creation

The final technique to discuss regarding the Video Carousel is how the meshes were created. Initially, the geometry was created using a 3D modeling tool. These geometries were then exported as .obj files and imported into the Expression Interactive Designer tool. The resultant XAML was then extracted and used in The North Face In-Store Explorer proof-of-concept. This workflow was successful, but it had a major limitation. The geometry that was exported was composed of six meshes: one for the front side, one for the backside, and four around the sides.

To understand why this was not an ideal solution, it's important to understand how WPF displays 3D models. Every 3D mesh displayed in WPF has an associated Material; a GeometryModel3D combines a mesh with a Material. In general, if a designer has to choose between more GeometryModel3Ds with fewer polygons each, or fewer GeometryModel3Ds with more polygons each, the latter should be preferred. As an example, imagine that a designer wants to construct a cube with each face painted by the same Material. The cube consists of 12 triangles, so the designer could conceivably create 12 GeometryModel3Ds, each containing a mesh with only a single triangle. Alternatively, the designer could create a single GeometryModel3D with a single mesh containing all 12 triangles. The latter can be substantially more efficient.

As for the Video Carousel, the design of the proof-of-concept called for the ability to see the video playing on both sides of the geometry. Because the original geometry was composed of separate meshes for the front and back sides, two distinct video materials had to be used. If only one was used, the video would be obscured when the item was farthest back in the carousel. However, painting two videos on each mesh proved to carry a substantial performance penalty, just like creating a cube from several GeometryModel3Ds instead of one.

The solution to the problem was to merge both the front-face mesh and back-face mesh into a single mesh, requiring a single material, and residing in a single GeometryModel3D. This is possible in Avalon because there is no requirement that the triangles of a mesh are in any way connected. A mesh is simply a collection of triangles, and these triangles can be completely dissociated from each other. All triangles in a mesh are painted with the same material, and all are subject to the same transforms, but they needn't be connected. This enables the ability to generate a single mesh that actually consists of two planes. In the case of the Video Carousel sample, the two planes are offset from one another in a purely parallel fashion; in The North Face In-Store Explorer proof-of-concept, the planes were actually curved, offset from one another to create a slice of a spherical shell.

Since back-face culling is always enabled on current builds of WPF, one must remember that the back faces of a mesh will be wound opposite those of the front face. Also, to create the impression that the back face of a material is really the same as the front face, the horizontal texture components should be reversed so that the back has a "mirror image" of the front.
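The same two-plane mesh shown in XAML below can also be generated in code. The following sketch mirrors that geometry; the method name and halfDepth parameter are illustrative, not part of the sample:

```csharp
// Build a single mesh consisting of two disconnected parallel planes:
// a front face at z = +halfDepth and a back face at z = -halfDepth.
// The back face is wound opposite the front, and its positions are listed
// right-to-left so the same texture coordinates yield a mirror image.
static MeshGeometry3D CreateTwoPlaneMesh(double halfDepth)
{
    MeshGeometry3D mesh = new MeshGeometry3D();

    // Front plane.
    mesh.Positions.Add(new Point3D(-0.5,  0.5,  halfDepth));
    mesh.Positions.Add(new Point3D(-0.5, -0.5,  halfDepth));
    mesh.Positions.Add(new Point3D( 0.5, -0.5,  halfDepth));
    mesh.Positions.Add(new Point3D( 0.5,  0.5,  halfDepth));
    // Back plane, reversed horizontally.
    mesh.Positions.Add(new Point3D( 0.5,  0.5, -halfDepth));
    mesh.Positions.Add(new Point3D( 0.5, -0.5, -halfDepth));
    mesh.Positions.Add(new Point3D(-0.5, -0.5, -halfDepth));
    mesh.Positions.Add(new Point3D(-0.5,  0.5, -halfDepth));

    // Front face counterclockwise, back face clockwise.
    foreach (int i in new int[] { 0, 1, 2, 2, 3, 0,
                                  4, 5, 6, 6, 7, 4 })
    {
        mesh.TriangleIndices.Add(i);
    }

    // Same texture coordinates for both quads.
    for (int q = 0; q < 2; q++)
    {
        mesh.TextureCoordinates.Add(new Point(0, 0));
        mesh.TextureCoordinates.Add(new Point(0, 1));
        mesh.TextureCoordinates.Add(new Point(1, 1));
        mesh.TextureCoordinates.Add(new Point(1, 0));
    }
    return mesh;
}
```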

So the Model3DGroup used in the Video Carousel demo is as follows:

   <Model3DGroup x:Key="ListItem3DModel3DGroup">
        <!--Single Mesh With Both Front And Back Face
        With Reversed Texture Coordinates and Opposite
        Winding Order on the Back Face-->
        <GeometryModel3D>
            <GeometryModel3D.Geometry>
                <MeshGeometry3D
                    Positions="-0.5,0.5,0.125 -0.5,-0.5,0.125 0.5,-0.5,0.125
                               0.5,0.5,0.125 0.5,0.5,-0.125 0.5,-0.5,-0.125
                               -0.5,-0.5,-0.125 -0.5,0.5,-0.125"
                    TextureCoordinates="0,0 0,1 1,1 1,0 0,0 0,1 1,1 1,0"
                    TriangleIndices="0 1 2 2 3 0 4 5 6 6 7 4" />
            </GeometryModel3D.Geometry>
            <GeometryModel3D.Material>
                <DiffuseMaterial Brush="#48565E" />
            </GeometryModel3D.Material>
        </GeometryModel3D>
        <!--Single Mesh That creates the four sides-->
        <GeometryModel3D>
            <GeometryModel3D.Geometry>
                <MeshGeometry3D
                    Positions="0.5,0.5,0.125 0.5,-0.5,0.125 0.5,-0.5,-0.125
                               0.5,0.5,-0.125 -0.5,0.5,-0.125 -0.5,-0.5,-0.125
                               -0.5,-0.5,0.125 -0.5,0.5,0.125 -0.5,0.5,-0.125
                               -0.5,0.5,0.125 0.5,0.5,0.125 0.5,0.5,-0.125
                               -0.5,-0.5,0.125 -0.5,-0.5,-0.125 0.5,-0.5,-0.125
                               0.5,-0.5,0.125"
                    TextureCoordinates="0,0 0,1 1,1 1,0 0,0 0,1 1,1 1,0
                                        0,0 0,1 1,1 1,0 0,0 0,1 1,1 1,0"
                    TriangleIndices="0 1 2 2 3 0 4 5 6 6 7 4 8 9 10 10 11 8
                                     12 13 14 14 15 12" />
            </GeometryModel3D.Geometry>
            <GeometryModel3D.Material>
                <DiffuseMaterial Brush="#48565E" />
            </GeometryModel3D.Material>
        </GeometryModel3D>
    </Model3DGroup>

If you look closely at the first MeshGeometry3D, you will notice that the first four points in the Positions collection have a Z coordinate of 0.125 and the last four have a Z coordinate of -0.125. Effectively, the first four coordinates create the front face, and the last four create the back face.

The reverse winding occurs in the TriangleIndices collection: the first two triangles, composed of positions 0-3, are wound counterclockwise, while the last two triangles, composed of positions 4-7, are wound clockwise.

(It should be noted that the performance characteristics of this prerelease version of WPF are subject to substantial change before release, and that the optimizations made here may not be relevant in the final release.)


A shout-out to the following individuals who helped bring this proof-of-concept to fruition:

David Teitlebaum, Tom Taylor, Tom Mulcahy, Matt Caulkins, Robert Hogue