October 2014

Volume 29 Number 10

Unity : Developing Your First Game with Unity and C#, Part 3

Adam Tuliper | October 2014

You’re still with me in this series. Good. In the first article, I covered some Unity basics (msdn.microsoft.com/magazine/dn759441). In the second, I focused on 2D in Unity (msdn.microsoft.com/magazine/dn781360). Now I get to my favorite part of game development—3D. The world of 3D is a truly magical place—amazing immersive environments, rich sound effects and beautiful visuals—even just a simple puzzle game with real-world physics can keep you hooked for hours.

3D games definitely add a layer of complexity over 2D, but by taking it piece by piece you can build up a cool 3D game. When you create a new project, Unity lets you choose 2D or 3D defaults, but both setups support 3D. You can have 3D objects in a 2D game (and vice versa).

What Makes Up a 3D Scene?

3D scenes consist primarily of three main visual components—lights, mesh renderers and shaders. A light is, well, a light, and Unity supports four different types. You can find them all under the GameObject menu. Experiment with adding the various types and changing their properties. The easiest way to light up your scene is with a directional light, which acts like the sun in the sky.

A mesh (or model) is a collection of vertices that make up the polygons that make up an object. A shader is a compiled routine that contains code to control how your object will show or interact with light. Some shaders simply take light and reflect it like a mirror; others take a texture (an image to be applied to your mesh) and can enable shadows and depth; and some even allow you to cut visual holes through your models, like a fence.

Models are typically FBX or OBJ files exported from another modelling software package. FBX files can also contain animation data, so you might receive one FBX file for your model and one containing several animations. Several third-party file formats are also supported, such as the Autodesk Maya .ma format and Blender files. You will typically need the third-party program installed on the same system if you want Unity to import these files, and then it’s simply a matter of dragging and dropping them into your Unity project, just as you would any other file. Behind the scenes, Unity will convert other file formats (upon import or detecting file changes) into the FBX file format.

Asset Store

I touched on the Asset Store in my first article, but it's with 3D games that it really comes in handy. I'm not an artist, and because this is a technical magazine, I assume most of you aren't, either. (If you are, please accept my congrats; you're part of a rare group.) But if I want to create a game with lush environments and old destroyed buildings, for example, it's not a problem. I can buy what I need from the Asset Store. If I want 15 different zombies, I can procure a pack from Mixamo in the Asset Store. The potential combinations are nearly endless, so don't worry about someone else's game looking like yours. Best of all, the Asset Store integrates into Unity. You can upgrade your packages by clicking Window | Asset Store and then the bin icon. You can also check out reviews and comments to more easily determine if a particular item is good for your project, for example, whether it's mobile-optimized. Desktop games can typically handle a lot more objects/vertices/textures/memory than a mobile game, although some of the newer chips make mobile devices today seem like Xbox 360s.

In a typical 3D game, many of the same concepts from a 2D game apply—colliders, triggers, rigid bodies, game objects/transforms, components and more. Regardless of the type of 3D game, you’ll typically want to control input, movement, and characters; use animations and particle effects; and build an imaginative world that’s both fantastical and realistic. I’ll discuss some of the ways Unity helps with this.

Input, Movement and Character Controllers

Reading input for movement becomes a bit more complicated in 3D because rather than simply moving in the X and Y planes, you can now move in three dimensions: X, Y and Z. Scenarios for 3D movement include (but aren’t limited to) top-down movement, where a character moves only horizontally and vertically; rotating a camera or character when reading mouse input, as is done in many first-person shooter (FPS) games; strafing left to right when reading horizontal input; rotating to turn around when reading horizontal input; or just walking backward. There are a good number of movement options from which to choose.
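As a minimal sketch of reading that input for top-down movement (the SimpleMover class name and the speed value are illustrative, not from the article), you can sample the built-in Horizontal and Vertical axes each frame:

```csharp
using UnityEngine;

public class SimpleMover : MonoBehaviour {
  public float speed = 5f; // Units per second; tune in the Inspector.

  void Update() {
    // GetAxis returns a value from -1 to 1 and already maps the
    // keyboard, gamepads and so on via the Input Manager.
    float h = Input.GetAxis("Horizontal");
    float v = Input.GetAxis("Vertical");

    // Move in the X/Z plane (top-down style), frame-rate independent.
    transform.Translate(new Vector3(h, 0, v) * speed * Time.deltaTime);
  }
}
```

Swapping the Z component for Y would instead give you vertical movement, 2D-style.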

When moving an object, you don’t give it a position to move to, as you might expect. Remember, you’re executing code with each frame, so you need to move the object in small increments. You can either let the physics engine handle this by adding a force to your rigidbody to move it, or you can tween the object. Tweening basically means transitioning between values; that is, moving from point A to point B. There are various ways to tween values in Unity, including free third-party libraries such as iTween. Figure 1 shows some manual ways to move an object in Unity. Note that for simplicity, they haven’t been optimized (to do so, I’d hold a reference to the transform in a variable to prevent going from managed code to native code too often).

Figure 1 Various Methods for Moving Objects

// Method 1
void Update() {
  // Move from point a to point b by .2 each frame - assuming called in Update.
  // Will not overshoot the destination, so .2 is the max amount moved.
  transform.position =
    Vector3.MoveTowards(transform.position, new Vector3(10, 1, 100), .2f);
}
// Method 2
void Update() {
  // Interpolate from point a to point b by a percentage each frame,
  // in this case 10 percent (.1 float).
  var targetPosition = new Vector3(10, 0, 15);
  transform.position = Vector3.Lerp(transform.position, targetPosition, .1f);
}
// Method 3
void Update() {
  // Move the object forward in the direction it is rotated.
  // If you rotate the object 90 degrees, it will now move forward in the
  // direction it is now facing. This essentially translates local
  // coordinates to world coordinates to move the object in the direction
  // and distance specified by the vector. See the Unity Coordinate
  // Systems section in the main article.
  transform.Translate(Vector3.forward * Time.deltaTime);
}
// Method 4
void FixedUpdate() {
  // Cause the object to act like it's being pushed to the
  // right (positive x axis). You can also use (Vector3.right * someForce)
  // instead of new Vector3().
  rigidbody.AddForce(new Vector3(7, 0, 0), ForceMode.Force);
}
// Method 5
void FixedUpdate() {
  // Cause the object to act like it's being pushed along the positive
  // x axis (world coordinates) at a speed of approx 7 meters per second.
  // The object will slow down due to friction.
  rigidbody.velocity = new Vector3(7, 0, 0);
}
// Method 6
// Move the rigidbody's position (note this is not via the transform).
// This method will push other objects out of the way and move to the right
// in world space ~three units per second.
private Vector3 speed = new Vector3(3, 0, 0);
void FixedUpdate() {
  rigidbody.MovePosition(rigidbody.position + speed * Time.deltaTime);
}
// Method 7
private float forwardSpeed = 3f;
void FixedUpdate() {
  // Vector3.forward is 0,0,1. You could move a character toward 0,0,1, but you
  // actually want to move the object forward no matter its rotation.
  // This is used when you want a character to move in the direction it's
  // facing, no matter its rotation. You need to convert the meaning of
  // this vector from local space (0,0,1) to world space,
  // and for that you can use TransformDirection and assign that vector
  // to its velocity.
  rigidbody.velocity = transform.TransformDirection(Vector3.forward * forwardSpeed);
}

Each approach has advantages and disadvantages. Moving just the transform (methods 1-3) can incur a performance hit, though it's a very easy way to handle movement. Unity assumes that if an object doesn't have a rigidbody component, it probably isn't a moving object, and it builds a static collision matrix internally to track where objects are, which enhances performance. When you move objects by moving the transform, this matrix has to be recalculated, which causes a performance hit. For simple games, you may never notice the hit and moving the transform may be the easiest approach, but as your games get more complicated, it's important to move the rigidbody itself, as I did in methods 4-7.

Rotating Objects

Rotating an object is fairly simple, much like moving an object, except the vectors now represent degrees instead of a position or a normalized vector. A normalized vector is simply a vector with a length of one, and it's handy whenever you just want to reference a direction. There are some vector shortcuts available to help, such as Vector3.right, left, up, down, forward, back, zero and one. Anything that will move or rotate in the positive horizontal direction can use Vector3.right, which is just a shortcut for (1,0,0), or one unit to the right. When rotating an object, this instead represents one degree around the X axis. In Figure 2, I just rotate an object by a little bit in each frame.

Figure 2 Methods for Rotating an Object

// Any code below that uses _player assumes you have this code
// prior to it to cache a reference to the player.
private GameObject _player;
void Start() {
  _player = GameObject.FindGameObjectWithTag("Player");
}
// Method 1
void Update() {
  // Rotate around the X axis at one degree
  // per second (Vector3.right = (1,0,0)).
  transform.Rotate(Vector3.right * Time.deltaTime);
}
// Method 2
void Update() {
  // No matter where the player goes, rotate toward him, like a gun
  // turret following a target.
  transform.LookAt(_player.transform);
}
// Method 3
void Update() {
  Vector3 relativePos = _player.transform.position - transform.position;
  // If you set rotation directly, you need to do it via a Quaternion.
  transform.rotation = Quaternion.LookRotation(relativePos);
}

Each of these techniques has minor nuances. Which one should you use? I would try to apply forces to the rigidbody, if possible. I’ve probably just confused you a bit with that option. The good news is, there’s existing code that can do virtually all of this for you.

Did you notice the Quaternion in Method 3? Unity uses Quaternions internally to represent all rotations. Quaternions are efficient structures that prevent an effect called gimbal lock, which can happen if you use regular Euler angles for rotation. Gimbal lock occurs when two axes are rotated to be on the same plane and then can’t be separated. (The video at bit.ly/1mKgdFI provides a good explanation.) To avoid this problem, Unity uses Quaternions rather than Euler angles, although you can specify Euler angles in the Unity Editor and it will do the conversion into a Quaternion on the back end. Many people never experience gimbal lock, but I wanted to point out that if you want to set a rotation directly in code, you must do it via a Quaternion, and you can convert from Euler angles using Quaternion.Euler.
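As a quick sketch of that conversion (the rotation values and class name here are arbitrary, chosen only for illustration):

```csharp
using UnityEngine;

public class RotationSetter : MonoBehaviour {
  void Start() {
    // Convert Euler angles (in degrees) to a Quaternion; this is the
    // same conversion the Editor performs behind the scenes.
    transform.rotation = Quaternion.Euler(0, 90, 0);

    // Read the rotation back as Euler angles for display or debugging.
    Debug.Log(transform.rotation.eulerAngles);
  }
}
```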

Now that you’ve seen many options, I should note that I find the easiest method is to use a rigidbody and simply apply .AddForce to the character. I prefer to reuse code when I can, and luckily Unity supplies a number of prefabs.

Let’s Not Reinvent the Wheel

Unity provides the Sample Assets package in the Asset Store (bit.ly/1twX0Kr), which contains a cross-platform input manager with mobile joystick controls, some animations and particles, and most important, some prebuilt character controllers.

Some older assets are still included with Unity (version 4.6 as of this writing), but they're now distributed as a separate package that Unity can update independently. Rather than having to write all of the code to create a first-person character in your game, a third-person character, or even a self-driving car, you can simply use the prefabs from the sample assets. Drag and drop one into your scene and instantly you have a third-person view with multiple animations and full access to the source code, as shown in Figure 3.

A Third-Person Prefab
Figure 3 A Third-Person Prefab


Animation

An entire book could be dedicated (and has been) to the Mecanim animation system in Unity. Animations in 3D are generally more complicated than in 2D. In 2D, an animation file typically changes a sprite renderer in each key frame to give the appearance of animation. In 3D, the animation data is a lot more complex. Recall from my second article that animation files contain key frames. In 3D, there can be many key frames, each with many data points for changing a finger, moving an arm or a leg, or performing any number and type of movements. Meshes can also have bones defined in them and can use components called skinned mesh renderers, which deform the mesh based on how the bones move, much as a living creature would.

Animation files are usually created in a third-party modeling/animation system, although you can create them in Unity, as well.

The basic pose for a character in a 3D animation system is the T-pose, which is just what it sounds like: the character standing straight with outstretched arms. It applies to just about any humanoid-shaped model. You can then enliven that basic character by having Mecanim assign virtually any animation file to it. You can have a zombie, an elf and a human all dancing the same way. You can mix and match the animation files however you see fit and assign them via states, much as you would in 2D. To do this, you use an animation controller like the one shown in Figure 4.

Animation Controller for Controlling a Character’s Animation States
Figure 4 Animation Controller for Controlling a Character’s Animation States

Remember, you can get characters and animations from the Unity Asset Store; you can create them with modeling tools; and there are third-party products like Mixamo’s Fuse that enable you to quickly generate your own customized characters. Check out my Channel 9 videos for an intro to animation in Unity.
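As a sketch of how a script might drive those animation states at run time (the "Speed" and "Attack" parameter names are assumptions, not from the article; they must match parameters defined in your animation controller):

```csharp
using UnityEngine;

public class AnimationDriver : MonoBehaviour {
  private Animator _animator;

  void Start() {
    _animator = GetComponent<Animator>();
  }

  void Update() {
    // A float parameter typically blends between idle/walk/run states.
    _animator.SetFloat("Speed", Input.GetAxis("Vertical"));

    // A trigger parameter typically fires a one-shot state, such as an attack.
    if (Input.GetButtonDown("Fire1"))
      _animator.SetTrigger("Attack");
  }
}
```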

Creating a World

Unity has a built-in terrain system for generating a world. You can create a terrain and then use the included terrain tools to sculpt your terrain, make mountains, place trees and grass, paint textures, and more. You can add a sky to your world by importing the skybox package (Assets | Import Package | Skyboxes) and assigning it in Edit | Render Settings | Skybox Material. It took me just a couple of minutes to create a terrain with reflective, dynamic water, trees, sand, mountains and grass, as shown in Figure 5.

A Quickly Created Terrain
Figure 5 A Quickly Created Terrain

Unity Coordinate Systems

Unity has four different methods for referring to a point in a game or on the screen, as shown in Figure 6. Screen space ranges from 0 to the number of pixels, and is typically used to get the location on the screen where the user touches or clicks. Viewport space is simply a value from 0 to 1, which makes it easy to say, for example, that halfway is .5 rather than having to divide pixels by 2; I can place an object in the middle of the screen by using (.5, .5) as its position.

World space refers to the absolute position of an object in the game, with (0, 0, 0) as the origin. All top-level game objects in a scene have their coordinates listed in world space. Finally, local space is always relative to the parent game object; for a top-level game object, this is the same as world space. All child game objects are listed in the Editor in coordinates relative to their parent, so a model of a house, for example, may have world coordinates of (200, 0, 35), while its front door (assuming it's a child game object of the house) might be only (1.5, 0, 0), as that's relative to the parent.

In code, when you reference transform.position, it's always in world coordinates, even for a child object; in the example, the door would be (201.5, 0, 35). If you instead reference transform.localPosition, you'd get (1.5, 0, 0). Unity has functions for converting among the various coordinate systems.

Coordinates in Unity
Figure 6 Coordinates in Unity

In the prior move examples, I mostly used world space, but in some cases I used local space. Refer back to method 7 in Figure 1. In that example, I take the local normalized (or unit) vector Vector3.forward, which is (0,0,1). By itself this doesn't mean much, but it shows intent to move something along the Z axis, which is forward. What if the object is rotated 90 degrees from (0,0,0)? Forward can now have two meanings: the original absolute Z axis (in world coordinates), or a Z axis relative to the rotated object, which always points forward for the object. If I want an object to always move forward no matter its rotation, I can translate the local forward vector into the real-world forward vector by using transform.TransformDirection(Vector3.forward * speed), as shown in that example.
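To make the conversions concrete, here's a hedged sketch of two common ones: turning a mouse click (screen space) into a ray into the 3D world, and turning a viewport point into a world position. It assumes the scene's camera is tagged MainCamera so that Camera.main resolves:

```csharp
using UnityEngine;

public class CoordinateExamples : MonoBehaviour {
  void Update() {
    // Screen space (pixels) to the 3D world: the usual way to ask
    // "what did the user click on?"
    if (Input.GetMouseButtonDown(0)) {
      Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
      RaycastHit hit;
      if (Physics.Raycast(ray, out hit))
        Debug.Log("Clicked " + hit.collider.gameObject.name);
    }

    // Viewport space (0..1) to world space: (.5, .5) is the center of
    // the screen; the z component is the distance from the camera.
    Vector3 center = Camera.main.ViewportToWorldPoint(new Vector3(.5f, .5f, 10f));
    Debug.DrawLine(transform.position, center);
  }
}
```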

Threading and Coroutines

Unity uses a coroutine system to manage its threads. If you want something to happen in what you think should be a different thread, you kick off a coroutine rather than creating a new thread, and Unity manages it all behind the scenes. The coroutine pauses when it hits a yield statement and resumes afterward. In the example in Figure 7, an attack animation is played, the coroutine pauses for a random length of time, and then the attack plays again.

Figure 7 Using a Coroutine to Pause Action

void Start() {
  // Kick off a routine that acts like a separate thread.
  StartCoroutine(Attack());
}
IEnumerator Attack() {
  while (true) {
    // Trigger an attack animation (an Animator with an "Attack"
    // trigger is assumed), wait .5 to 4 seconds, then repeat.
    GetComponent<Animator>().SetTrigger("Attack");
    float randomTime = Random.Range(.5f, 4f);
    yield return new WaitForSeconds(randomTime);
  }
}

Physics and Collision Detection

Physics and collision detection features in 3D are nearly the same as in 2D, except the colliders are shaped differently and the rigidbody component has a few different properties, such as being able to accomplish free rotations or movement in the X, Y and Z axes. In 3D there’s now a mesh collider that wraps the entire shape of a model as a collision-detection zone. This might sound great, and for collisions it’s pretty good, but it’s not good for performance. Ideally, you want to simplify collider shapes and limit the processing power it takes to use them. Have a zombie? No problem, use a capsule collider. A complex object? Use multiple colliders. Avoid the mesh collider if possible.
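As a sketch, here's how you might give a character a simple capsule collider from code rather than the Inspector (the height and radius values are illustrative; normally you'd just add and size the component in the Editor):

```csharp
using UnityEngine;

public class ColliderSetup : MonoBehaviour {
  void Start() {
    // A capsule is a cheap, good-enough fit for most characters and is
    // far less expensive than a mesh collider.
    CapsuleCollider capsule = gameObject.AddComponent<CapsuleCollider>();
    capsule.height = 2f;  // Roughly humanoid height, in units.
    capsule.radius = .5f;

    // Set capsule.isTrigger = true instead if you want OnTriggerEnter
    // callbacks rather than physical collisions.
  }
}
```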

Unity provides a number of methods to know when a collision happens or a trigger is triggered. The following shows just a basic example:

void OnCollisionEnter(Collision collision) {
  // Called when you have a physical collision.
  Debug.Log("Collided with " + collision.gameObject.name);
}
void OnTriggerEnter(Collider collider) {
  // Called when another object comes within the trigger zone.
  Debug.Log("Triggered by " + collider.gameObject.name);
}

There are many more methods than listed here, such as OnTriggerExit and OnCollisionExit, and they're almost identical to their 2D counterparts.

Object Creation

When you want to create new GameObject-based items at run time, you don't use constructors; instead, you use Instantiate. You can certainly have classes with constructors, just not in scripts that inherit from MonoBehaviour, which happens to be all the top-level scripts assigned to any GameObject. Those scripts can, however, call constructors for other objects all they want:

// Assume this reference has been assigned in the Editor.
public GameObject zombie;
void Start() {
  // Create a new instance of that game object. This can be
  // a prefab from your project or an object already in the scene.
  Instantiate(zombie, transform.position, Quaternion.identity);
}
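Instantiate also returns a reference to the clone so you can keep configuring it, and Destroy is its counterpart for removal. A hedged sketch (the ZombieSpawner name and the five-second lifetime are illustrative):

```csharp
using UnityEngine;

public class ZombieSpawner : MonoBehaviour {
  // Assign a prefab to this field in the Editor.
  public GameObject zombiePrefab;

  void Start() {
    // Instantiate returns the new instance (cast from Object),
    // so you can keep a reference and configure it further.
    GameObject clone =
      (GameObject)Instantiate(zombiePrefab, transform.position, Quaternion.identity);
    clone.name = "Zombie";

    // Destroy's optional second argument delays removal; here the
    // clone is removed after five seconds.
    Destroy(clone, 5f);
  }
}
```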

Particle Effects

If you want flashing stars, dust, snow, explosions, fire, mist from a waterfall, blood effects or a number of other effects, you use a particle effect. There’s an old particle system in Unity and a newer, more optimized one called Shuriken. You can do so many amazing things with Shuriken in Unity, including having your falling particles support collisions. Because there are many tutorials out there, such as the one at bit.ly/1pZ71it, and they’re typically created in the editor with the designer, here I’ll just show how they can be instantiated when, say, a character enters the trigger region of a coin to collect.

To get started with particles, simply go to the Game Object | Particle System menu and you’ll immediately see one added to your scene, as in Figure 8.

A Particle Effect
Figure 8 A Particle Effect

I like to create prefabs (which I covered in the second article) from my particle systems so I can easily reuse them. I can then instantiate them via code: I assign the script in Figure 9 to a game object (the script derives from MonoBehaviour, as all game object script components do) and, in the editor, drag a particle effect from my scene, or a prefab from my project, onto the exposed SmokeEffect property.

Figure 9 The Exposed SmokeEffect Property

public ParticleSystem smokeEffect;
void OnTriggerEnter(Collider collider) {
  // Only show particles if the player comes within the zone.
  if (collider.gameObject.tag == "Player") {
    // Create the particle system at this game object's position
    // with no rotation.
    Instantiate(smokeEffect, transform.position, Quaternion.identity);
    // Note: to remove the object itself, don't call Destroy(this);
    // "this" is a script component on a game object, so use
    // this.gameObject, or just gameObject.
  }
}

Creating a UI

Unity 4.6 added a brand-new UI system for creating heads-up displays and in-game UI elements using text, panels, widgets and more. Adding text to your game's display is simply a matter of clicking GameObject | UI | Text and setting the font and the text. If you want to control that text later via code, perhaps to update a score, you simply use:

// Gets the UnityEngine.UI.Text component.
var score = GetComponent<Text>();
score.text = "Score:0";

If I want an image in my UI, I simply click on GameObject | UI | Image and assign a 2D sprite image to this new component. I can set these values just as with any other game object. I hope you see a pattern by now. To create a simple GUI, create the UI objects via the GameObject | UI menu, set the initial values in the Editor and control them later by getting references to those UI components and setting the values, or even animating the values. I built a basic GUI, shown in Figure 10, by creating elements underneath a new Canvas component. The new Unity 4.6 UI system contains a number of basic object types, such as Panel, Button, Text, Image, Slider, Scrollbar, and Toggle, and it’s incredibly easy to anchor them, scale them, and drag and drop them to create a UI.

A UI with an Image and Heads-up Text
Figure 10 A UI with an Image and Heads-up Text
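As a sketch of wiring that score text to gameplay code (the ScoreDisplay name and "Score: " format are illustrative; the script is assumed to live on the same game object as the Text component):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ScoreDisplay : MonoBehaviour {
  private Text _scoreText;
  private int _score;

  void Start() {
    // Cache the Text component once rather than finding it every frame.
    _scoreText = GetComponent<Text>();
  }

  // Call this from your gameplay code whenever points are earned.
  public void AddPoints(int points) {
    _score += points;
    _scoreText.text = "Score: " + _score;
  }
}
```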

AI in Your Game

It wouldn’t be fair not to mention AI, though I won’t get into creating AI here (even though the building blocks for it are in the earlier code samples for find/move/rotate). But I’ll mention a few options that are available to you. I hesitate to call AI in a game AI, because it’s often not so much intelligence as a very basic action. I showed you how to have a transform rotate toward another object and move that object; that’s the basic AI in many games. Unity has some built-in path-finding capability with its NavMesh support, which calculates ahead of time all the paths around objects. NavMesh works pretty well and is now included in the free edition of Unity, although many choose instead to use the A* Pathfinding Project (arongranberg.com/astar); you can implement the A* algorithm yourself, or save time by purchasing the ready-made asset package. As of this writing, only 3D pathfinding support is built into Unity, not 2D, although A* does have that capability. Behave 2.0 from AngryAnt is a popular AI plug-in for Unity with some really strong features, and there’s also RAIN, a free AI toolkit from rivaltheory.com, which is pretty decent and has built-in behaviors for follow, find, Mecanim integration and more.
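As a minimal "chase the player" sketch using the built-in NavMesh (this assumes you've baked a NavMesh for the scene via the Navigation window, the enemy has a NavMeshAgent component, and the player object is tagged Player; the ChasePlayer name is illustrative):

```csharp
using UnityEngine;

public class ChasePlayer : MonoBehaviour {
  private NavMeshAgent _agent;
  private GameObject _player;

  void Start() {
    _agent = GetComponent<NavMeshAgent>();
    _player = GameObject.FindGameObjectWithTag("Player");
  }

  void Update() {
    // The agent computes a path and steers around obstacles on its own.
    _agent.SetDestination(_player.transform.position);
  }
}
```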

Wrapping Up

The 3D world adds an extra layer of complexity over 2D, because it deals with full meshes and one more dimension. The Asset Store is absolutely key for beginners and advanced developers alike, and you can get off to a quick start by using pre-created assets.

When I started developing games, I went crazy finding so many models and textures on the Internet. There are some great asset marketplaces out there, but you’ll quickly find they aren’t all good for games. I once downloaded a small boulder that had nearly 100,000 vertices in its model! Look for assets that are mobile-optimized, or check the vertex/polygon count to ensure you find ones that will work for your games; otherwise, they can slow your performance down considerably. There are optimization tools you can run on models, including one for Unity called Cruncher. In the next article, I’ll discuss how to take a game or app from Unity over to the Windows platform. Check out my Channel 9 blog (aka.ms/AdamChannel9) for some videos and links to content to download.

Adam Tuliper is a senior technical evangelist with Microsoft living in sunny Southern California. He’s an indie game dev, co-admin of the Orange County Unity Meetup and a Pluralsight.com author. He and his wife are about to have their third child, so reach out to him while he still has a spare moment at adamt@microsoft.com or on Twitter at twitter.com/AdamTuliper.

Thanks to the following technical experts for reviewing this article: Matt Newman (Subscience Studios) and Tautvydas Žilys (Unity)