By using touch screens for in-play control of objects, you can avoid an additional physical model while retaining realism. It would also be interesting to see games use realistic touch screen interfaces, so that a character would have to remove his gloves to use a capacitive screen. Lastly, the exotic screen technologies mentioned earlier provide many creative avenues for modeling those types of screens in games. For example, for screens that measure sound waves in the glass or other mechanical energy, low-grade explosions could be used to trigger these in-game input devices.
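To sketch that last idea, the following minimal model shows an in-game screen that registers "input" when a nearby explosion delivers enough mechanical energy. The function names, units, and threshold are all illustrative assumptions, not part of any real engine API; the inverse-square falloff is a common simplification for a point source.

```python
import math

TRIGGER_THRESHOLD = 0.5  # minimum energy (arbitrary units) to register input

def energy_at_screen(blast_energy, distance):
    """Energy reaching the screen, falling off with the inverse
    square of distance (point-source simplification)."""
    return blast_energy / (4 * math.pi * max(distance, 1e-6) ** 2)

def screen_triggered(blast_energy, distance):
    """True if a low-grade explosion at `distance` would trigger
    the sound-sensing in-game screen."""
    return energy_at_screen(blast_energy, distance) >= TRIGGER_THRESHOLD

# A small blast close by triggers the screen; the same blast far away does not.
print(screen_triggered(100.0, 2.0))   # prints True
print(screen_triggered(100.0, 50.0))  # prints False
```

In a real game you would feed this from the physics or audio system rather than a raw distance, but the thresholding idea is the same.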
Difference from Mouse-Based Input
One important consideration for game developers with regard to touch screens is the difference from traditional mouse- and keyboard-based gaming. As console game developers have long been aware, it is hard to compete with the speed and accuracy of the mouse/keyboard combination. Many first-person shooters segregate their online gaming between controllers and mouse/keyboard setups, because the accuracy and speed of the mouse give those players an unfair advantage. Having used touch screens on many different gaming devices and mobile computing platforms, we feel that this advantage is even more pronounced.
A finger touch is an elliptical contact patch whose shape depends on the specific finger being used, the pressure applied, and the orientation of the finger. The user generally perceives the point of touch to be below the actual center of contact, so adjustments must be made. This is generally handled automatically by the operating system, which computes a single touch point and hands it to the game via an API. However, this generic approach to computing touches necessarily sacrifices accuracy for universality, since it is not calibrated to any one user.
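A rough sketch of what such an OS-level pipeline might do: reduce the contact ellipse to its center, then shift that point along the finger's orientation toward where the user perceives the touch to be. The offset value and function names here are illustrative assumptions, not platform constants.

```python
import math

PERCEIVED_OFFSET = 4.0  # pixels; a generic, uncalibrated correction

def touch_point(center_x, center_y, orientation_rad):
    """Return the reported touch point: the contact-ellipse center
    shifted by a fixed, user-independent offset along the finger's
    orientation, toward the perceived point of touch.
    Screen coordinates: y grows downward; orientation 0 = finger
    pointing straight up the screen."""
    dx = PERCEIVED_OFFSET * math.sin(orientation_rad)
    dy = PERCEIVED_OFFSET * math.cos(orientation_rad)
    return (center_x + dx, center_y + dy)

# Finger pointing straight up: the reported point sits below the center.
print(touch_point(100, 200, 0.0))  # prints (100.0, 204.0)
```

Because the offset is fixed rather than calibrated per user, this illustrates exactly the accuracy-for-universality trade-off described above.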
Another inherent drawback of touch screens is the need to touch the screen itself. This means a large portion of your hand will be blocking the screen while you are controlling an element. One can imagine that in a first-person shooter, this would be a great disadvantage against someone playing with a keyboard and mouse.
Lastly, mouseover is not available with touch-screen-based input. Consider a game where you trigger actions by merely moving a mouse cursor over an object, actions that could be distinct from clicking on the same object. With touch-screen-based input, however, that object would be obscured by whatever is triggering the screen, rendering the mouseover action invisible to the user.
Custom Gestures
As a last note, another possibility for touch input to a game is the use of custom gestures. These allow the user to draw a shape on the screen that the program recognizes as a gesture; the program can then execute arbitrary code based on that input. As this is more pattern recognition than physics, we won't cover it here, but we recommend Designing Gestural Interfaces by Dan Saffer (O'Reilly) for a detailed look at the subject.
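To give a flavor of the pattern-recognition side, here is a minimal template matcher in the spirit of stroke-based recognizers: the user's stroke is sampled and scaled into a unit square, then compared to stored templates by average point-to-point distance. This is a deliberate simplification under stated assumptions (index-based sampling rather than arc-length resampling, no rotation invariance), not a production recognizer.

```python
import math

N_POINTS = 16  # fixed number of sample points per stroke

def normalize(points, n=N_POINTS):
    """Pick n points along the stroke (by index) and scale them into
    a unit square, so position and size do not affect matching."""
    sampled = [points[round(i * (len(points) - 1) / (n - 1))] for i in range(n)]
    xs = [p[0] for p in sampled]
    ys = [p[1] for p in sampled]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in sampled]

def distance(a, b):
    """Average point-to-point distance between two normalized strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the name of the template gesture closest to the stroke."""
    norm = normalize(stroke)
    return min(templates, key=lambda name: distance(norm, normalize(templates[name])))

# Two hypothetical gesture templates and a drawn stroke to classify.
templates = {
    "slash": [(0, 0), (1, 0.1), (2, 0.2), (3, 0.3)],
    "vee":   [(0, 0), (1, 2), (2, 0)],
}
print(recognize([(10, 5), (20, 5.5), (30, 6)], templates))  # prints "slash"
```

Once a gesture name comes back, the game can dispatch whatever code is bound to it, which is exactly the "execute arbitrary code based on that input" step described above.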