When configuring touch targets to control other elements of a scene, it’s important to minimize the screen space that the controlling elements occupy. In this way, you can devote more of the Ultrabook™ device’s viewable screen area to displaying visual action and less of it to user interaction. One way to accomplish this is to configure the touch targets to handle multiple gesture combinations, eliminating the need for additional touch targets on the screen. An example is continually tapping a single graphical user interface (GUI) widget to make a turret rotate while firing, rather than dedicating one GUI widget to firing and another to rotating the turret (or some other asset in the Unity* 3D scene).
This article shows you how to configure a scene using touch targets to control the first person controller (FPC). Initially, you’ll configure the touch targets for basic FPC position and rotation; then you’ll augment them with additional functionality. This additional functionality is achieved through existing GUI widgets and does not require adding geometry. The resulting scene demonstrates that Unity 3D running on Windows* 8 is a viable platform for handling multiple gestures used in various sequences.
Configure the Unity* 3D Scene
I begin setting up the scene by importing an FBX terrain asset with raised elevation and trees, which I had exported from Autodesk 3ds Max*. I then place an FPC at the center of the terrain.
I set the depth of the scene’s main camera, a child of the FPC, to −1. I create a dedicated GUI widget camera with an orthographic projection, a width of 1, and a height of 0.5, and I set its clear flags to Don’t Clear. I then create a GUIWidget layer and set it as the GUI widget camera’s culling mask.
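For reference, the following runtime sketch shows equivalent settings. In practice these values are set in the Inspector; the interpretation of the width and height values as the camera’s normalized viewport rect, the layer name lookup, and the class name are assumptions.

using UnityEngine;

// Illustrative only: these values are normally set in the Inspector.
public class GuiCameraSetup : MonoBehaviour
{
    void Awake()
    {
        // The main camera (a child of the FPC) renders first.
        Camera.main.depth = -1f;

        // Attach this script to the dedicated GUI widget camera.
        var guiCamera = GetComponent<Camera>();
        guiCamera.orthographic = true;
        guiCamera.rect = new Rect(0f, 0f, 1f, 0.5f);      // width 1, height 0.5 (assumed to be the viewport rect)
        guiCamera.clearFlags = CameraClearFlags.Nothing;  // "Don't Clear"
        guiCamera.cullingMask = 1 << LayerMask.NameToLayer("GUIWidget");
    }
}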
Next, I place basic GUI widgets for FPC manipulation in the scene in view of the dedicated orthographic camera. For the left hand, I configure a sphere for each finger. The left little sphere moves the FPC left, the left ring sphere moves it forward, the left middle sphere moves it right, and the left index sphere moves it backward. The left-thumb sphere makes the FPC jump and launches spherical projectiles at an angle of 30 degrees clockwise.
For the right-hand GUI widget, I create a cube (rendered as a square by the orthographic projection). I configure this cube with a Pan Gesture and tie it to the MouseLook.cs script. This widget delivers functionality similar to that of an Ultrabook touch pad.
I place these GUI widgets out of view of the main camera and set their layer to GUIWidget. Figure 1 shows the scene at runtime, with these GUI widgets in use to launch projectiles and manipulate the position of the FPC.
Figure 1. FPC scene with terrain and launched spherical projectiles
The projectiles launched from the FPC pass through the trees in the scene. To remedy this, I would need to configure each tree with a mesh or box collider. Another issue with this scene is that forward velocity is slow if I use the touch pad to make the FPC look down while pressing the left ring sphere to move the FPC forward. To resolve this issue, I limit the “look-down” angle while the “move forward” button is pressed.
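One way to apply this limit is from the left ring sphere’s script, tightening the MouseLook pitch range while the sphere is pressed. The camera lookup, the angle values, and the handler wiring in this sketch are assumptions.

// In the script attached to the left ring ("move forward") sphere:
private MouseLook cameraLook;

void Start()
{
    // Assumes the pitch-limiting MouseLook instance sits on the FPC's main camera.
    cameraLook = GameObject.Find("Main Camera").GetComponent<MouseLook>();
}

private void pressedHandler(object sender, System.EventArgs e)
{
    cameraLook.minimumY = -15.0f;   // restrict the look-down angle while moving forward
}

private void releasedHandler(object sender, System.EventArgs e)
{
    cameraLook.minimumY = -60.0f;   // restore the default look-down range
}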
Multiple Taps
The base scene contains an FPC that fires projectiles at a specified angle off center (see Figure 1). The default for this off-center angle is 30 degrees clockwise when looking down on the FPC.
I configure the scene so that multiple taps arriving within a specified time differential alter the angle at which the projectile is launched before the projectile is fired. This behavior could be configured to change the angle exponentially with the number of taps in the sequence by manipulating float variables in the left-thumb jump script. These float variables control the firing angle and keep track of the time since the last projectile was launched:
private float timeSinceFire = 0.0f;
private float firingAngle = 30.0f;
I then configure the Update loop in the left-thumb jump script to decrement the firing angle if the jump sphere’s tap gestures arrive less than one-half second apart. The firing angle is reset to 30 degrees if the taps are more than one-half second apart or the firing angle has been decremented to 0 degrees.
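As a minimal sketch, assuming a tapped flag set by the jump sphere’s tap handler and a fixed decrement step of 5 degrees (both assumptions), the Update loop might look like this:

// In the left-thumb jump script:
private bool tapped;   // set to true by the jump sphere's tap handler

void Update()
{
    timeSinceFire += Time.deltaTime;

    if (tapped)
    {
        if (timeSinceFire < 0.5f && firingAngle > 0.0f)
        {
            firingAngle -= 5.0f;    // rapid taps walk the launch heading toward 0 degrees
        }
        else
        {
            firingAngle = 30.0f;    // a slow tap, or a fully decremented angle, resets the default
        }
        timeSinceFire = 0.0f;
        tapped = false;
        // Launch a projectile at the current firingAngle here (launch code not shown).
    }
}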
This code produces a strafing effect, where continuous tapping launches projectiles while decrementing the angle at which they’re launched (see Figure 2). This effect is something you can let a user customize or make available only under specific conditions in a simulation or game.
Figure 2. Continuous taps rotate the heading of the launched projectile.
Scale Followed by Pan
I configured the square in the lower right of Figure 1 to function similarly to a touch pad on a keyboard. Panning over the square doesn’t move the square but instead rotates the scene’s main camera up, down, left, and right by feeding the FPC’s MouseLook script. Similarly, a Scale Gesture (similar to a pinch on other platforms) received by the square doesn’t scale the square but instead alters the main camera’s field of view (FOV), allowing a user to zoom in and out on whatever the main camera is looking at (see Figure 3). I also configure a Pan Gesture initiated shortly after a Scale Gesture to return the FOV to its default of 60 degrees.
I configure this function by adding a Boolean variable, panned, and float variables to hold the time since the last Scale and Pan Gestures:
private float timeSinceScale;
private float timeSincePan;
private bool panned;
I set the timeSinceScale variable to 0.0f when a Scale Gesture is initiated and set the panned variable to True when a Pan Gesture is initiated. The FOV of the scene’s main camera is then adjusted in the Update loop of the script attached to the touch pad cube.
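A minimal sketch of that Update logic, assuming the one-half-second differential used throughout and a main camera reachable through Camera.main, might look like this:

// In the script attached to the touch pad cube:
void Update()
{
    timeSinceScale += Time.deltaTime;
    timeSincePan += Time.deltaTime;

    // A Pan Gesture arriving within one-half second of a Scale Gesture
    // returns the main camera to its default 60-degree FOV.
    if (panned && timeSinceScale < 0.5f)
    {
        Camera.main.fieldOfView = 60.0f;
        panned = false;
    }
}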
Following are the onScale and onPan functions. Note the timeSincePan float variable, which prevents the FOV from being constantly increased while the touch pad is being used to rotate the camera. Inside onPan, the gesture’s world-space delta is converted into the cube’s local space:
var local = new Vector3(transform.InverseTransformDirection(target.WorldDeltaPosition).x,
                        transform.InverseTransformDirection(target.WorldDeltaPosition).y, 0);
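A minimal sketch of how these handlers might be structured around that line follows. The ScaleGesture property name (LocalDeltaScale), the handler signatures, the use of the TouchScript.Gestures namespace, and the one-half-second threshold for suppressing scale input during panning are assumptions.

private void onScale(object sender, System.EventArgs e)
{
    // Ignore scale input that arrives while the pad is being panned so that
    // rotating the camera does not also change the zoom.
    if (timeSincePan > 0.5f)
    {
        var gesture = sender as ScaleGesture;
        // LocalDeltaScale is an assumed property name; it varies across TouchScript versions.
        Camera.main.fieldOfView /= gesture.LocalDeltaScale;
    }
    timeSinceScale = 0.0f;
}

private void onPan(object sender, System.EventArgs e)
{
    var target = sender as PanGesture;
    var local = new Vector3(transform.InverseTransformDirection(target.WorldDeltaPosition).x,
                            transform.InverseTransformDirection(target.WorldDeltaPosition).y, 0);
    // Feed the local pan delta to the FPC's MouseLook script here (not shown).
    panned = true;
    timeSincePan = 0.0f;
}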
Figure 3. The scene’s main camera “zoomed in” on distant features via the right GUI touch pad simulator
Press and Release Followed by Flick
The following gesture sequence increases the horizontal speed of the FPC when the left little sphere receives press and release gestures followed by a Flick Gesture within one-half second.
To add this functionality, I begin by adding a float variable to keep track of the time since the sphere received the Release Gesture and a Boolean variable to keep track of whether the sphere has received a Flick Gesture:
private float timeSinceRelease;
private bool flicked;
As part of the scene’s initial setup, I configured the script attached to the left little sphere with access to the FPC’s InputController script, which allows the left little sphere to instigate moving the FPC to the left. The variable controlling the FPC’s horizontal speed is not in the InputController but in the FPC’s CharacterMotor. Granting the left little sphere’s script access to the CharacterMotor is configured similarly.
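A minimal sketch of that setup, assuming the FPC object name, the gesture handler wiring, and a speed increment of 2.0, might look like this:

// In the left little sphere's script:
private CHCharacterMotor characterMotor;

void Start()
{
    // The FPC object name is an assumption; adjust to match the scene hierarchy.
    characterMotor = GameObject.Find("First Person Controller").GetComponent<CHCharacterMotor>();
}

private void releasedHandler(object sender, System.EventArgs e)
{
    timeSinceRelease = 0.0f;        // start timing from the Release Gesture
}

private void flickedHandler(object sender, System.EventArgs e)
{
    flicked = true;
}

void Update()
{
    timeSinceRelease += Time.deltaTime;

    if (flicked)
    {
        // A Flick Gesture within one-half second of the press and release
        // increases the FPC's sideways (horizontal) speed.
        if (timeSinceRelease < 0.5f)
        {
            characterMotor.movement.maxSidewaysSpeed += 2.0f;   // increment is an assumption
        }
        flicked = false;
    }
}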
This code gives the user the ability to increase the horizontal movement speed of the FPC by pressing and releasing the left little sphere and then flicking it within one-half second. You could configure the ability to decrease the horizontal movement speed in any number of ways, including a Flick Gesture following a press and release of the left index sphere. Note that the CHCharacterMotor.movement member contains not only maxSidewaysSpeed but also gravity, maxForwardsSpeed, maxBackwardsSpeed, and other parameters. The many TouchScript gestures, the geometries that receive them, and these parameters can be combined to provide many options and strategies for developing touch interfaces to Unity 3D scenes. When developing touch interfaces for these types of applications, experiment with these options to narrow them to those that provide the most efficient and ergonomic user experience.
Issues with Gesture Sequences
The gesture sequences I configured in the examples in this article rely heavily on Time.deltaTime. I use this time differential, in combination with the gestures occurring before and after it, to determine an action. The two main issues I encountered when configuring these examples were the magnitude of the time differential and the gestures used.
Time Differential
The time differential I used in this article is one-half second. When I used a smaller differential of one-tenth second, the gesture sequences weren’t recognized. Although I felt I was tapping fast enough for the gesture sequence to be recognized, the expected scene action did not occur, possibly as a result of hardware and software latency. As such, when developing gesture sequences, keep the performance characteristics of the target hardware platforms in mind.
Gestures
When configuring this example, I originally planned to have Scale and Pan Gestures followed by Tap and Flick Gestures. With the Scale and Pan Gestures functioning as desired, I introduced a Tap Gesture, which caused the Scale and Pan Gestures to cease functioning. Although I was able to configure a sequence of Scale followed by Pan, this is not the most user-friendly gesture sequence. A more useful approach may be to add another geometry target to the widget to accept the Tap and Flick Gestures after the Scale and Pan Gestures.
I used the time differential of one-half second in this example as the break point for actions taken (or not taken). Although it adds a level of complexity to the user interface (UI), you could configure this example to use multiple time differentials. Where Press and Release Gestures followed by a Flick Gesture within one-half second might increase the horizontal speed, the same sequence with the Flick Gesture arriving between one-half second and 1 second might decrease it, as sketched below. Using time differentials in this manner not only offers flexibility for the UI but could also be used to plant “Easter eggs” within the scene itself.
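Building on the press-and-release sketch shown earlier, the Update check might branch on two thresholds like this; the threshold values and speed steps are assumptions:

// Inside the Update loop of the left little sphere's script:
if (flicked)
{
    if (timeSinceRelease < 0.5f)
    {
        characterMotor.movement.maxSidewaysSpeed += 2.0f;   // quick follow-up flick: speed up
    }
    else if (timeSinceRelease < 1.0f)
    {
        characterMotor.movement.maxSidewaysSpeed -= 2.0f;   // slower follow-up flick: slow down
    }
    flicked = false;
}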
Conclusion
The gesture sequence scene I configured for this article uses Unity 3D with TouchScript on Ultrabook devices running Windows 8. The sequences implemented are intended to reduce the amount of touch screen area the user needs to interact with the application. The less touch screen area dedicated to user interaction, the more you can dedicate to visually appealing content.
When I wasn’t able to get a gesture sequence to perform as desired, I was able to formulate an acceptable alternative. Part of this tuning was adjusting the time differential, accumulated from Time.deltaTime, so that a gesture sequence performed as desired on the available hardware. As such, the Unity 3D scene I constructed for this article shows that Windows 8 running on Ultrabook devices is a viable platform for developing apps that use gesture sequences.
About the Author
Lynn Thompson is an IT professional with more than 20 years of experience in business and industrial computing environments. His earliest experience was using CAD to modify and create control system drawings during a control system upgrade at a power utility. During this time, Lynn received his B.S. degree in Electrical Engineering from the University of Nebraska, Lincoln. He went on to work as a systems administrator at an IT integrator during the dot-com boom. This work focused primarily on operating system, database, and application administration on a wide variety of platforms. After the dot-com bust, he worked on a range of projects as an IT consultant for companies in the garment, oil and gas, and defense industries. Now, Lynn has come full circle and works as an engineer at a power utility. Lynn has since earned a Master of Engineering degree with a concentration in Engineering Management, also from the University of Nebraska, Lincoln.