Third-person games can be exciting to play, and they are typically easy to control because of the wide range of view the player has. Below is an example of a third-person “cover” mechanic:
The problem is that it’s not entirely realistic. If a person were behind a pillar, they wouldn’t have access to that amount of information about the environment on the other side of the pillar. This is what often makes first-person games more immersive, and when they are shooters, more stressful and difficult to master.
My idea is to split the difference. A game can still be designed as a third-person game, but it can deal with this problem of available visual information. In the clip above, imagine that, after the player zooms out and returns to behind the pillar, everything beyond that pillar (that is, anything that isn’t in the character’s new field of vision from behind the pillar) becomes obscured in some way. Maybe as soon as they retreat to the pillar, everything that was once clear starts to “melt” into a blurry, unfocused memory of what was once there. Maybe for example:
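To give a rough sense of the geometry involved (this is a toy 2D sketch, not engine code; every name and number here is made up for illustration): you could decide what to obscure by testing whether each object sits inside the character’s view cone and whether the pillar, modeled as a circle, blocks the line from the character’s eyes to it.

```python
import math

def is_visible(eye, facing, target, occluders, fov_deg=110.0):
    """Return True if `target` is inside the view cone from `eye`
    (facing must be a unit vector) and not blocked by any circular
    occluder (cx, cy, radius), e.g. a pillar."""
    dx, dy = target[0] - eye[0], target[1] - eye[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return True
    # Outside the view cone -> not visible, candidate for blurring.
    cos_angle = max(-1.0, min(1.0, (dx * facing[0] + dy * facing[1]) / dist))
    if math.degrees(math.acos(cos_angle)) > fov_deg / 2:
        return False
    # Blocked by an occluder? Check the eye->target segment against
    # each circle by projecting the circle centre onto the segment.
    for (cx, cy, r) in occluders:
        t = ((cx - eye[0]) * dx + (cy - eye[1]) * dy) / (dist * dist)
        t = max(0.0, min(1.0, t))
        px, py = eye[0] + t * dx, eye[1] + t * dy
        if math.hypot(cx - px, cy - py) < r:
            return False
    return True
```

Anything that fails this check would feed into whatever blur or “melt” effect the renderer applies; a real game would use the engine’s own raycasts rather than hand-rolled geometry.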
Obviously, it wouldn’t look like a crappy Photoshop filter. And, in motion, it would be much more interesting and dynamic. Maybe certain aspects of the environment could be memorized over time as you spend time in the room, such as key structural and architectural pieces. Those pieces would appear more clearly, but things like enemies and other dynamic elements would be clear only while the player was actually looking at them. Enemies, obviously, are constantly moving around, so they would never be seen clearly after the character retreats from looking. But other elements could potentially come into play, like using sound cues and communication with other players to help create a “ghost” of where an enemy is likely located.
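The memory side could be modeled as a per-object “clarity” value (again, a hypothetical sketch; the class name, rates, and units are invented): static architecture ratchets up toward full clarity while observed and keeps it, while dynamic things like enemies decay back toward blur the moment they leave view.

```python
class MemoryModel:
    """Tracks how clearly the player 'remembers' each object.
    Clarity of 1.0 renders sharp; 0.0 renders fully blurred."""

    def __init__(self, learn_rate=0.5, decay_rate=0.8):
        self.clarity = {}              # object id -> clarity in [0, 1]
        self.learn_rate = learn_rate   # clarity gained per second in view
        self.decay_rate = decay_rate   # clarity lost per second out of view

    def update(self, obj_id, is_static, in_view, dt):
        """Advance one object's memory by `dt` seconds; return its clarity."""
        c = self.clarity.get(obj_id, 0.0)
        if in_view:
            # Anything currently looked at becomes more clearly remembered.
            c = min(1.0, c + self.learn_rate * dt)
        elif not is_static:
            # Dynamic objects (enemies) fade once out of sight;
            # static architecture keeps whatever clarity it earned.
            c = max(0.0, c - self.decay_rate * dt)
        self.clarity[obj_id] = c
        return c
```

The sound-cue “ghost” idea would slot in naturally here: a sound event could bump an enemy’s clarity at a guessed position instead of its true one.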
The idea is to recreate that feeling of not looking directly at something but still knowing, in a broad sense, what it looks and feels like.
[ Today I Was Playing: NomNomGalaxy ]
January 17, 2016