Thesis research week 5 – The 12 Principles of Animation in 3D Game Animation

The principles of animation motion are a form of performance law in which artistic expression and technical operability exist in parallel, and both are indispensable. They bring together science, artistry, and technology, and exist as a distilled summary of how animated movement works. The 12 principles of animation are the best-known of these laws of motion, and their connection to game animation is inseparable. This chapter focuses on how the 12 principles are embodied in 3D game animation.

1. Squash and Stretch

Unlike traditional film and television animation, many video game engines do not support the scaling of bones, due to memory limitations. However, even if the model cannot deform, this principle is still important. Since the model cannot be squashed and stretched, most characters in realistic games emphasize a stretched posture by extending their limbs during fast movements such as jumps, take-offs, and landings. In cartoony games such as Overwatch and League of Legends, animators still frequently use this principle for a character’s rapid actions, such as drawing a gun.


Figure 5 Overwatch character McCree’s gun-drawing action

2. Staging

This principle does not directly apply to gameplay animation. It appears only in the linear portions of games, such as cinematics, where the camera and/or characters are authored by the animator.

(Figure 6 Staging in Gears of War)

3. Anticipation

Anticipation is the prelude to the main action, indicating the direction, strength, size, and speed of the movement to come. The animator needs enough anticipation to support the plausibility of the character’s actions. With too little, the ensuing action, such as a sprint or a sword swing, loses weight; with too much, the player will feel the game is unresponsive. Anticipation is therefore not only an aesthetic concern but also an important part of player feedback.

4. Straight Ahead and Pose to Pose

In game animation there is little need to work straight ahead, and pose to pose is the preferred method for most game animations. This is mainly because, as the game design progresses, an animation is likely to be changed or even deleted. Keyframed game animations will require continual iteration, and it is much easier to rework rough key-pose animations than fully polished ones. It is therefore important for game animators not to be precious about their work. Keeping an animation in a pose-to-pose or unfinished state for as long as possible not only reduces waste but also allows the animator to create more rough versions, so that many animations can eventually be blended together to create a better, smoother game character than a single exquisite animation would. When using motion capture, however, all of this is reversed: the animator essentially uses the captured in-between motion as a starting point, then adds key poses and retimes from there.

5. Follow-Through and Overlapping Action

When an object moves, not all of its parts move at the same time; the movement of the main object drags its subsidiary parts along. Overlapping action covers the concept that different parts of the body move at different rates. When throwing a punch, the head and torso dominate the movement, the bent arm drags behind, then whips forward just before impact to deliver the blow. A common mistake among junior animators is to have every element of a character start or arrive at the same time, which looks unnatural. Follow-through is related to anticipation, but it describes what happens after the main action (the opposite end of it). This includes recovering balance after a jump, or a heavy sword or axe embedding itself in the ground and being laboriously hauled back onto the character’s shoulder. It also includes the secondary movement of items such as cloth and hair trailing the initial action. Overlapping action is a good way to convey the weight of an object or character, and maintaining a strong pose at this stage really helps the player read the action.

6. Slow in and Slow out

This principle describes the visual effect of movement accelerating and decelerating: because of an object’s weight, an action usually moves more slowly at its beginning and end. The concept is easily visualized by moving a sphere from one position to another. Uniform, straight-line movement makes the sphere travel the same distance each frame, whereas slow in and slow out makes the sphere’s per-frame positions bunch up near the start and end points as its speed rises and then falls.
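As a rough illustration of that spacing, here is a minimal Python sketch comparing linear motion with an eased curve. The names and the smoothstep easing function are my own choices for illustration, not any animation package's API:

```python
def smoothstep(t):
    """Classic ease-in/ease-out curve: slow near 0 and 1, fastest in the middle."""
    return t * t * (3.0 - 2.0 * t)

def sample_positions(start, end, frames, ease):
    """Position of the sphere on each frame, moving linearly or with
    slow in and slow out. A toy illustration, not an engine's curve editor."""
    positions = []
    for f in range(frames + 1):
        t = f / frames
        if ease:
            t = smoothstep(t)
        positions.append(start + (end - start) * t)
    return positions

linear = sample_positions(0.0, 100.0, 10, ease=False)
eased = sample_positions(0.0, 100.0, 10, ease=True)
# Linear spacing is constant; eased spacing bunches near the start and end.
first_step_linear = linear[1] - linear[0]  # 10 units per frame
first_step_eased = eased[1] - eased[0]     # ~2.8 units: the sphere creeps out of the pose
```

Printing the two lists frame by frame shows exactly the spacing difference described above: the eased sphere barely moves in the first and last frames but covers the most ground mid-motion.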

The important thing is that not everything needs to slow in and slow out. “Therefore, there is once again a conflict between the player’s desire for gameplay to react immediately and the desire for artistry to give the character weight” (Cooper, 2019). For example, a sword that swings immediately looks very light, so the game animator’s task is to add weight back in at the tail end of the action. When the character and sword return to the idle state, the action can be fast, but it must still follow slow in and slow out. Game animators often exaggerate the recoil of a pistol to convey its relative power and damage as a weapon, while maintaining the immediate response and feedback of shooting.

7. Arcs

When an object or character moves, most natural motion paths trace arcs, such as the swing of the arms and legs while walking. Anything that deviates from the natural curve catches the eye and looks wrong, so arcs are a good way to hone the grace and correctness of an action. Much of the cleanup work needed to make motion capture usable in games consists of removing abnormal breaks from the arcs naturally present in human motion, breaks that become glaringly obvious when repeated in a video game. Conversely, animating every element of a character to follow perfectly clean arcs can look light or floaty, because nothing attracts the eye. As with overlapping action, and most general rules, knowing when to break the smoothness adds a higher level of detail to the animation and makes it more realistic.

8. Secondary Actions

Secondary actions complement and emphasize the character’s main actions, adding extra detail and visual interest to the basic motion. Although it is difficult to layer multiple actions into many game animations because they are so concise (a secondary action must support, not interfere with, the read of the primary action), it is these small details that can make a good animation great.

Examples of secondary actions include changes in facial expression that accompany battle or injury animations, and fatigue reactions that appear when running for a long time. In game animation, additive and partial-body animations allow actions to be layered on top of basic actions, providing secondary actions that run longer than the individual animations required for player control.

9. Appeal

Every animator should aim for appeal when giving life to a character, but appeal is hard to put into words and therefore difficult to describe. It is the difference between an animated face that can depict real emotion and one that looks creepy. “It is the sum of an animator’s skill in selling the force of a combat action versus a movement that comes across as weak. It is the believability in a character’s performance compared to one that appears robotic and unnatural” (Cooper, 2019).

Appeal is the magical element that makes players believe in the character they are interacting with, whether it sits at the stylized or the realistic end of the spectrum. It should not be confused with likeability or attractiveness: even the player’s enemies must look aesthetically pleasing and show appeal. Appeal begins with character design, before an animator ever manipulates the character, where proportion and color separation are the first steps of a multi-stage creation process. Animation and final rendering then make the character as appealing as possible; simplicity of visual design and strong posing improve the readability of actions, and a clear silhouette distinguishes one character from another.

10. Timing

Timing is at the core of the feel of an animation and is usually used to convey the weight of a character or object. In essence, the timing principle concerns speed: the time an object takes to move or rotate through a given distance or angle tells the viewer how heavy or powerful the movement is.

This is why every animation curve editor displays a distance axis against a time axis as the animator’s main tool for visualizing the speed of the motion they are crafting. For example, if an animator poses a character with arms outstretched, a punch delivered in 2 frames reads as faster than the same punch over 5 or more frames.

With reference to slow in and slow out, proper timing ensures that the character or object obeys the laws of physics. The faster the action, the lighter the weight, and vice versa.

In addition, the timing of reactions gives actions room to breathe, such as holding a pose after a sword swing before the next one, so that the player can read what the game character is doing.

11. Exaggeration

Real life never seems real enough. If we film a real person performing an action, such as jumping from a height and landing on the floor, and copy it exactly into animation, the character may look slow and unappealing. Real movement does not follow perfect arcs, nor does it create attractive or powerful silhouettes. In animation we aim for a hyper-reality that presents real life better than reality does. This is especially true in game animation, where actions usually have to look great from all angles, not just from the fixed camera of traditional linear media. That is why one of the best tools in the animator’s toolkit is to exaggerate what already exists. When working from reference, the animator must reinterpret the action in a “super-real” way, emphasizing poses and holding them longer than in reality.

Care must be taken to keep the level of exaggeration consistent throughout the project. This is mainly maintained by the animation lead or director, because the level of exaggeration is a style choice, and inconsistency between actions (or between animators) will stand out and look unappealing as players play through the whole game.

12. Solid Drawing

Although it may not seem relevant at first glance in the age of 3D animation, it must be remembered that drawing is a basic means of conveying information among team members, and using thumbnails to explain problems or explore solutions happens almost every day on a game design team.

Most exceptional animators can readily sketch design concepts, a skill that is especially useful in the early stages of character design for illustrating the advantages and disadvantages of specific visual elements. And although the work is no longer done on paper, when animating a character in 3D an understanding of volume and three-dimensionality is still essential for posing and for understanding the limits and workings of body mechanics.

Posted in Thesis

FMP 13 Hunter Idle Animation

A breathing/idle animation is one of the necessary animations in a game: when the player is not operating the character, the idle animation conveys that the character is alive. What I wanted to make this week is the idle animation. Below are the idle animation references I collected from games before starting production.

Warrior’s idle
blocking

When the character is on standby, showing only the character’s breathing is not enough, so I also added a knife-raising action to enrich the character’s state.

This is the render of my first version. I found that the swing of the sword was too large: this is a very heavy weapon, and such a large swing while the character is simply standing and breathing is clearly unreasonable.
final version
Posted in FMP

Thesis research week 4 – Different Areas of Game Animation

While game animators in larger teams typically specialize, those at smaller studios may wear the many hats listed below. Regardless, even when specializing, it is incredibly valuable to understand other areas of game animation to open up opportunities for creativity across disciplines—often, the best results occur when lines are blurred such that an animator might excel in all moving aspects of a game.

Player Character Animation

The primary and easily most challenging aspect of game animation is the motion of characters under the player’s control. This occurs in all but the most abstract of games and therefore is an important skill to focus on and for any game animator to have under his or her belt. Character animation style and quality can vary greatly across different game types (and studios), depending upon their unique goals, but one thing is becoming more apparent as the medium progresses—bad character animation is unacceptable these days. Bringing up the baseline standard is one of the main goals of this book.

Facial Animation

A relatively recent requirement (due to advances in the quality of characters enabling us to bring cameras in close), facial animation carries a particular risk: even the most undiscerning player can instinctively critique bad facial motion, thanks to a lifetime of experience with other humans.

How do we avoid these pitfalls when aiming to create believable characters that serve our storytelling aspirations? There are many decisions throughout a project’s development that must work in concert to bring characters to life that are not just believable, but appealing.

Cinematics & Cutscenes

A mainstay of games with even the slightest degree of storytelling, cinematic cutscenes give developers the rare opportunity to author scenes of a game enough so that they play out exactly as they envision. A double-edged sword, when used sparingly and done well, they can bring us much closer to empathizing with characters, but used too much and they divorce us from not just our protagonists but the story and experience as a whole. A well-rounded game animator should have a working knowledge of cinematography, staging, and acting to tell stories in as unobtrusive and economical a manner as possible.

Technical Animation

Nothing in games exists without some degree of technical wrangling to get it working, and game creation never ceases to surprise in all the ways it can break. A game animator should have at least a basic knowledge of the finer details of character creation, rigging, skinning, and implementation into the game—even more so if on a small team where the animator typically owns much of this responsibility alone. A game animator’s job only truly begins when the animation makes it into the game—at which point the systems behind various blends, transitions, and physical simulations can make or break the feel and fluidity of the character as a whole.

Nonplayer Characters

While generally aesthetically similar, the demands of player animations differ greatly from those of nonplayer characters (NPCs). Depending on the goals and design requirements of the game, they bring their own challenges, primarily with supporting artificial intelligence (AI) such as decision-making and moving through the world. Failing to realize NPCs to a convincing degree of quality can leave the player confused as to their virtual comrades’ and enemies’ intentions and believability.

Cameras

The camera is the window through which the game world is viewed. Primarily concerning player character animation in 3D games, a bad camera can undermine the most fluidly animated character. A good game animator, while perhaps not directly controlling the implementation, should take a healthy interest in the various aspects of camera design: how it reacts to the environment (colliding with walls, etc.), the rotation speed and general rigidity with which it follows player input, and the arc it takes as it pivots around the character in 3D games. It’s no wonder a whole new input (joypad right-stick) was added in the last decade just to support the newly required ability to look around 3D environments.

Environmental and Prop Animation

Although it may not be as glamorous as character animation, environmental animation can bring soulless places to life. In addition, interaction among characters, props, and the environment is an essential part of the game and helps immerse the player in the virtual world.
The use of weapons (mainly guns and melee types) is the backbone of many games, and the knowledge required to produce and maintain these animations effectively and convincingly is an important part of the animation work on most titles. All of these elements are essential for players to discover a more interactive world.

Posted in Thesis

Thesis research week 3 – The Five Fundamentals of Game Animation

The 12 animation principles are a great foundation for any animator to understand, and failure to do so will result in missing some of the underlying fundamentals of animation—visible in many a junior’s work. Ultimately, however, they were written with the concept of linear entertainment like TV and film in mind, and the move to 3D kept all of these elements intact due to the purely aesthetic change in the medium. Three-dimensional animated cartoons and visual effects are still part of a linear medium, so they will translate only to certain elements of video game animation—often only if the game is cartoony in style. As such, it’s time to propose an additional set of principles unique to game animation that don’t replace but instead complement the originals. These are what I have come to know as the core tenets of our new nonlinear entertainment medium, which, when taken into consideration, form the basis of video game characters that not only look good, but feel good under player control—something the original 12 didn’t have to consider. Many elements are essential in order to create great game animation, and they group under five fundamental areas:
1. Feel
2. Fluidity
3. Readability
4. Context
5. Elegance

Feel

The single biggest element that separates video game animation from traditional linear animation is interactivity. The very act of the player controlling and modifying avatars, making second-to-second choices, ensures that the animator must relinquish complete authorship of the experience. As such, any uninterrupted animation that plays start to finish is a period of time the player is essentially locked out of the decision-making process, rendered impotent while waiting for the animation to complete (or reach the desired result, such as landing a punch). The time taken between a player’s input and the desired reaction can make the difference between creating the illusion that the player is embodying the avatar or becoming just a passive viewer on the sidelines. That is why cutscenes are the only element in video games that for years have consistently featured a “skip” option—because they most reflect traditional noninteractive media, which is antithetical to the medium.

Response

Game animation must always consider the response time between player input and response as an intrinsic part of how the character or interaction will “feel” to the player. While generally the desire is to have the response be as quick as possible (fewer frames), that is dependent on the context of the action. For example, heavy/stronger actions are expected to be slower, and enemy attacks must be slow enough to be seen by the player to give enough time to respond.
It will be the game animator’s challenge, often working in concert with a designer and/or programmer, to offer the correct level of response to provide the best “feel,” while also retaining a level of visual fidelity that satisfies all the intentions of the action and the character. It is important not to sacrifice the weight of the character or the force of an action for the desire to make everything as responsive as possible, so a careful balancing act and as many tricks as available must be employed.
Ultimately, though, the best mantra is that “gameplay wins.” The most fluid and beautiful animation will always be cut or scaled back if it interferes too much with gameplay, so it is important for the game animator to have a player’s eye when creating response-critical animations, and, most importantly, play the game!

Inertia & Momentum

Inertia is a great way to not only provide a sense of feel to player characters, but also to make things fun. While some characters will be required to turn on a dime and immediately hit a run at full speed, driving a car around a track that could do the same would not only feel unrealistic but mean there would be no joy to be had in approaching a corner at the correct speed for the minimum lap time. The little moments when you are nudging an avatar because you understand their controls are where mastery of a game is to be found, and much of this is provided via inertia.
Judging death-defying jumps in a platform game is most fun when the character must be controlled in an analogue manner, whereby they take some time to reach full speed and continue slightly after the input is released. This is as much a design/programming challenge as it is animation, but the animator often controls the initial inertia boost and slowdown in stop/start animations.

Momentum is often conveyed by how long it takes a character to change from current to newly desired directions and headings. The general principle is that the faster a character is moving, the longer it takes to change direction via larger turn-circles at higher speeds or longer plant-and-turn animations in the case of turning 180°. Larger turn-circles can be made to feel better by immediately showing the intent of the avatar, such as having the character lean into the turn and/or look with his or her head, but ultimately we are again balancing within a very small window of time lest we render our characters unresponsive.
A classic example is the difference between the early Mario and Sonic the Hedgehog series. Both classic Mario’s and Sonic’s movement rely heavily on inertia, with similarly long ramp-ups to full speed. Mario’s animation, however, immediately shows him cartoonishly running at full speed, legs spinning on the ground to gain traction, while Sonic’s slowly transitions from a walk to a run to a sprint. Mario arguably feels better, but this is by design: Sonic’s gameplay centers on high speed and “flow,” so stopping or slowing down punishes the player for failing to maintain momentum.

Visual Feedback

A key component of the “feel” of any action the player and avatar perform is the visual representation of that action. A simple punch can be made to feel stronger with a variety of animation techniques, beginning with the follow-through after the action. A long, lingering held pose will do wonders for telling the player he or she just performed a powerful action. The damage animation on the attacked enemy is a key factor in informing the player just how much damage has been suffered, with exaggeration being a key component here.
In addition, employing extra tricks such as camera-shake will help further sell the impact of landing the punch or gunshot, not to mention visual effects of blood or flashes to further register the impact in the player’s mind. Many fighting games employ a technique named “hit-stop” that freezes the characters for a single frame whenever a hit is registered. This further breaks the flow of clean arcs in the animations and reinforces the frame on which the impact took place.
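The hit-stop idea can be sketched in a few lines of Python. This is a hypothetical illustration only: the class name, freeze durations, and update loop are my assumptions, not any engine's actual API.

```python
class HitStop:
    """Minimal hit-stop sketch: freeze animation time for a few frames on
    impact. Frame counts and the update interface are illustrative."""

    def __init__(self, freeze_frames=1):
        self.freeze_frames = freeze_frames
        self.frames_left = 0

    def register_hit(self):
        # Called on the frame an attack connects.
        self.frames_left = self.freeze_frames

    def animation_dt(self, dt):
        # Time step to advance animations this frame: zero while frozen.
        if self.frames_left > 0:
            self.frames_left -= 1
            return 0.0
        return dt


hs = HitStop(freeze_frames=2)
anim_time = 0.0
for frame in range(5):
    if frame == 1:
        hs.register_hit()  # the punch lands on frame 1
    anim_time += hs.animation_dt(1 / 60)
# Frames 1 and 2 are frozen, so only 3 of the 5 frames advance the animation.
```

The freeze interrupts the clean arcs of the motion for a moment, which is exactly what makes the impact frame register in the player's mind.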
As many moves are performed quickly so as to be responsive, they might get lost on the player, especially during hectic actions. Attacking actions can be reinforced by additional effects that draw the arc of the punch, kick, or sword-swipe on top of the character in a similar fashion to the “smears” and “multiples” of old. When a sword swipe takes only 2 frames to create its arc, the player benefits mostly from the arcing effect it leaves behind.
Slower actions can be made to feel responsive simply by showing the player that at least part of their character is responding to their commands. A rider avatar on a horse can be seen to immediately turn the horse’s head with the reins even if the horse itself takes some time to respond and traces a wide circle as it turns. This visual feedback will feel entirely more responsive than a slowly turning horse alone would following the exact same wide turn.

Much of the delay in visual feedback comes not from the animation alone, but the way different game engines handle inputs from the joypad in the player’s hands. Games like the Call of Duty series place an onus on having their characters and weapons instantly respond to the player’s inputs with minimal lag and high frame rates, whereas other game engines focused more on graphics postprocessing will have noticeably longer delays (measured in milliseconds) between a jump button-press and the character even beginning the jump animation, for example. This issue is further exacerbated by modern HDTVs that have lag built in and so often feature “Game Mode” settings to minimize the effect. All this said, it is still primarily an animator’s goal to make characters as responsive as possible within reason.

Fluidity

Rather than long flowing animations, games are instead made of lots of shorter animations playing in sequence. As such, they are often stopping, starting, overlapping, and moving between them. It is a video game animator’s charge to be involved in how these animations flow together so as to maintain the same fluidity put into the individual animations themselves, and there are a variety of techniques to achieve this, with the ultimate goal being to reduce any unsightly movement that can take a player out of the experience by highlighting where one animation starts and another ends.

Blending and Transitions

In classic 2D game sprites, an animation either played or it didn’t. This binary approach carried into 3D animation until developers realized that, because characters are essentially animated by poses recorded as numerical values, those values could be manipulated in a variety of ways. The first such improvement was the ability to blend (essentially cross-fading animations during a transitory stage), taking on every frame an increasing percentage of the next animation’s values and a decreasing percentage of the current one’s as one animation ended and another began. While more calculation intensive, this opened up opportunities for increasing the fluidity between individual animations and removing unsightly pops between them.

A basic example of this would be an idle and a run. Having the idle immediately cancel and the run immediately play on initial player input will cause the character to break into a run at full speed, but the character will unsightly pop as he or she starts and stops due to the potential repeated nature of the player’s input. This action can be made more visually appealing by blending between the idle and run over several frames, causing the character to more gradually move between the different poses. Animators should have some degree of control over the length of blends between any two animations to make them as visually appealing as possible, though always with an eye on the gameplay response of the action.
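The cross-fade described above amounts to a per-frame weighted average of two poses. Here is a simplified Python sketch: the joint names and scalar joint values are hypothetical, and a real engine would blend joint rotations with quaternion slerp/nlerp rather than plain scalars.

```python
def blend_pose(pose_a, pose_b, weight):
    """Cross-fade two poses: weight 0.0 gives pose_a, weight 1.0 gives pose_b.
    Poses are simplified here to per-joint scalar angles."""
    return {joint: (1.0 - weight) * a + weight * pose_b[joint]
            for joint, a in pose_a.items()}

idle = {"spine": 0.0, "knee": 5.0}   # hypothetical joint angles, in degrees
run = {"spine": 15.0, "knee": 45.0}

blend_frames = 4  # a blend length the animator might choose
frames = [blend_pose(idle, run, f / blend_frames)
          for f in range(blend_frames + 1)]
# Halfway through the blend, each joint sits midway between the two poses.
halfway = blend_pose(idle, run, 0.5)
# halfway == {"spine": 7.5, "knee": 25.0}
```

Lengthening `blend_frames` softens the transition but delays the run pose, which is the response-versus-fluidity trade-off discussed above.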

The situation above can be improved further (albeit with more work) by creating brief bespoke animations between idle and run (starting) and back again (stopping), with blends between all of them. What if the player started running in the opposite direction he or she is facing? An animator could create a transition for each direction that turned the character as he or she began running in order to completely control the character’s weight-shift as he or she leans into the desired direction and pushes off with his or her feet. What if the character isn’t running but only walking? Again, the animator could also create multiple directional transitions for that speed. As you can see, the number of animations can quickly spiral in number, so a balance must be found among budget, team size, and the desired level of fluidity.

Seamless Cycles

Even within a single animation, it is essential to maintain fluidity of motion, and that includes when a cycling animation stops and restarts. A large percentage of game animations repeat back on themselves, so it is important to again ensure the player cannot detect when this transition occurs. As such, care must be taken to maintain momentum through actions so the end of the animation perfectly matches the start. It is not simply enough to ensure the last frame of a cycle identically matches the first; the game animator must also preserve momentum on each body part to make the join invisible. This can be achieved by modifying the curves before and after the last frame to ensure they create clean arcs and continue in the same direction. For motion-capture, where curves are mostly unworkable, there are techniques that can automatically provide a preservation of momentum as a cycle restarts that are described later in this book.
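The seam condition described here, matching both pose and momentum across the loop point, can be expressed as a small check. This is a toy sketch with assumed names, using a sine wave as a stand-in for one animation channel:

```python
import math

def cycle_seam_ok(track, tol=1e-3):
    """Check one channel of a looping animation (a list of per-frame values,
    where the final frame is meant to coincide with the first). The pose must
    match across the seam, and so must the frame-to-frame velocity, so that
    momentum is preserved when the cycle restarts. A toy check, not an
    engine-accurate one."""
    pose_match = abs(track[-1] - track[0]) < tol
    vel_in = track[-1] - track[-2]   # velocity entering the seam
    vel_out = track[1] - track[0]    # velocity leaving the seam
    return pose_match and abs(vel_in - vel_out) < tol

# A 30-frame sine cycle loops cleanly: frame 30 repeats frame 0 in both
# position and velocity.
good = [math.sin(2 * math.pi * f / 30) for f in range(31)]
# Replacing the final value makes the pose pop at the seam.
bad = good[:-1] + [0.2]
```

The velocity comparison is what catches the subtler failure: a cycle whose last pose matches its first can still hitch visibly if a limb decelerates into the seam and then launches out of it at a different speed.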

Settling

Settling should generally be employed whenever a pose must be assumed at the end of an animation, time willing. It is rather unsightly to have a large movement like an attack animation end abruptly in the combat idle pose, especially with all of the character’s body parts arriving simultaneously. Offsetting individual elements such as the arms and root is key to a more visually pleasing settle. Notably, however, games often suffer from resuming the idle pose too quickly at the end of an animation in order to return control to the player and promote response. This can be avoided by animating a long tail on the end of an animation and, importantly, allowing the player to exit at a predetermined frame before the end if new input is provided. This ability to interrupt an animation before it finishes lets the animator use as many frames as needed for a smooth and fluid settle into the following animation.
Settling is generally achieved by first copying the desired end pose to the end of an animation but ensuring some elements like limbs (even divided into shoulder and forearms) arrive at their final position at different times, with earlier elements hitting, then overshooting, their goal, creating overlapping animation. Settling the character’s root (perhaps the single most important element, as it moves everything not planted) is best achieved by having it arrive at the final pose with different axes at different times. Perhaps it achieves its desired height (Y-axis) first as it is still moving left to right (X-axis), causing the root to hit, then bounce past the final height and back again. Offsetting in the order of character root, head, and limbs lessens the harshness of a character fully assuming the end pose on a single frame—though care must be taken to not overdo overlap such that it results in limbs appearing weak and floppy.

Readability

After interactivity, the next biggest differentiator between game and traditional animation, in 3D games at least, is that game animations will more often than not be viewed from all angles. This bears similarity to the traditional principle “staging,” but animators cannot cheat or animate to the camera, nor can they control the composition of a scene, so actions must be created to be appealing from all angles. What this means is when working on an animation, it is not enough to simply get it right from a front or side view. Game animators must take care to always be rotating and approving their motion from all angles, much like a sculptor walking around a work.

Posing for Game Cameras

To aid the appeal and readability of any given action, it is best to avoid keeping a movement all in one axis. For example, a combo of three punches should not only move the whole character forward as he or she attacks, but also slightly to the left and right, twisting as they do so. Similarly, the poses the character ends in after every punch should avoid body parts aligning with any axes, such as arms and legs that appear to bend only when viewed from the side. Each pose must be dynamic, with lines of action drawn through the character that are not in line with any axes.

Collision & Center of Mass/Balance

As with all animation, consideration must be given to the center of mass (COM; or center of balance) of a character at any given frame, especially as multiple animations transition between one another so as to avoid unnatural movements when blending. The COM is generally found over the leg that is currently taking the full weight of the character’s root when in motion or between both feet if they are planted on the ground when static. Understanding this basic concept of balance will not only greatly aid posing but also avoid many instances of motions looking wrong to players without them knowing the exact issue.

This is especially true when considering the character’s collision (location) in the game world. This is the single point where a character will pivot when rotated (while moving) and, more importantly, where the character will be considered to exist in the game at any given time. The game animator will always animate the character’s position in the world when animating away from the 3D scene origin, though not so if cycles are exported in place. Importantly, animations are always considered to be exported relative to this prescribed location, so characters should end in poses that match others (such as idles) relative to this position. This will be covered in full in the following chapter.
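The balance check described above can be made concrete with a mass-weighted average of body-part positions. This is a minimal sketch with invented masses and positions, not a rig-accurate model; it simply shows how far the COM sits laterally from the planted foot.

```python
# Mass-weighted center of mass from per-part positions (masses and
# positions below are invented, for illustration only).
def center_of_mass(parts):
    """parts: list of (mass, (x, y, z)); returns the COM as an (x, y, z) tuple."""
    total = sum(m for m, _ in parts)
    return tuple(sum(m * p[i] for m, p in parts) / total for i in range(3))

parts = [
    (30.0, (0.0, 1.0, 0.0)),  # torso/root
    (10.0, (0.0, 1.6, 0.0)),  # head
    (15.0, (0.2, 0.5, 0.0)),  # planted leg
]
com = center_of_mass(parts)
# lateral (x) offset of the COM from the planted foot at x = 0.2
offset = abs(com[0] - 0.2)
```

A large offset on a single-support frame is the kind of imbalance players notice as "looking wrong" without knowing why.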

Context

Whereas in linear animation, the context of any given action is defined by the scene in which it plays and what has happened in the story up to that point and afterward, the same is impossible in game animation. Oftentimes, the animator has no idea which action the player performed beforehand or the setting in which the character is currently performing the action. More often than not, the animation is to be used repeatedly throughout the game in a variety of settings, and even on a variety of different characters.

Elegance

Game animations rarely just play alone; instead they require underlying systems within which they are triggered, allowing them to flow in and out of one another at the player’s input—often blending seamlessly, overlapping one another, and combining multiple actions at once to ensure the player is unaware of the individual animations affording their avatar motion.

If not designing them outright, it is the game animator’s duty to work with others to bring these systems and characters to life, and the efficiency of any system can have a dramatic impact on the production and the team’s ability to make changes further down the line toward the end of a project. Just as a well-animated character displays efficiency of movement, a good, clean, and efficient system to play them can work wonders for the end result.

Posted in Thesis

FMP 12 Hunter Run Animation

Last semester I made a female run cycle; this week I plan to make a male one. The difference between them is not very big: the female run emphasizes the swing of the hips, while the male run has more obvious chest movement.

Running Reference

The principle of running is the same as walking. I set the total number of frames to 24, so frames 12, 15, 18, and 21 are mirror images of frames 1, 3, 6, and 9, and frame 1 equals frame 24 so the cycle loops.
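The mirroring described above can be sketched as a pose copy with left/right channels swapped, offset by half the cycle (12 frames). The channel names and angle values below are hypothetical, purely for illustration.

```python
# Build the second half of a 24-frame run cycle by mirroring the first half:
# swap left/right channels; lateral (side-to-side) channels mirror by negation.
def mirror_pose(pose):
    out = {}
    for ch, v in pose.items():
        if ch.startswith("L_"):
            out["R_" + ch[2:]] = v      # left swing becomes right swing
        elif ch.startswith("R_"):
            out["L_" + ch[2:]] = v
        elif ch.endswith("_sway"):
            out[ch] = -v                # side-to-side motion flips sign
        else:
            out[ch] = v
    return out

# hypothetical key pose on frame 1 (values are swing angles in degrees)
first_half = {1: {"L_arm": 30.0, "R_arm": -30.0, "hips_sway": 5.0}}
second_half = {f + 12: mirror_pose(p) for f, p in first_half.items()}
```

Frame 13 then carries the same pose with the sides exchanged, so only the first half of the cycle needs to be keyed by hand.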

frame 1
frame 4
frame 16
frame 24
This is the blocking of the basic poses of the arms, feet, and chest.

Since the stepping movement is both up-and-down and side-to-side, its motion trail traces a figure-eight shape (the “Figure 8”).
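That figure-eight trail can be written parametrically: one side-to-side sway per stride cycle, and two up-and-down bounces (one per step). The amplitudes below are arbitrary placeholder values, not measured from any rig.

```python
import math

# Parametric sketch of the figure-eight hip trail: lateral sway at the
# cycle frequency, vertical bounce at twice that frequency.
def hip_trail(t, sway=0.05, bounce=0.03):
    x = sway * math.sin(t)        # left-right: one full sway per cycle
    y = bounce * math.sin(2 * t)  # up-down: two bounces per cycle -> figure eight
    return x, y
```

Plotting (x, y) over one cycle traces the figure-eight; changing `sway` and `bounce` changes how pronounced the trail is.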

The movement of the arms can be understood as a pendulum motion. It is worth noting that when the character runs, the arms not only swing back and forth but also swing inward and outward to a certain extent.
This is the final rendered version.
Posted in FMP

Thesis research week 2- Motion Capture in 3D Game Animation

The application of skeletal animation in 3D games is becoming more and more common, and its standard mode is to use forward kinematics. In general, animators use animation software to make the series of animations required by the project in advance; when the game is running, the program decides which animation to display according to logic and calculates the specific display according to time. This way of using pre-made animation clips brings a big problem: all animations are immutable. To make the game as realistic as possible, the animations a game may need are rich and changeable, and producing all of them may be unacceptable in terms of cost and time. To overcome this problem, the game industry introduced the technology of motion capture.
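One complement to pre-made, immutable clips is blending them at runtime. A linear crossfade between two clips' poses is a minimal sketch of this idea; poses here are just per-joint angles, whereas real engines blend full transforms (typically interpolating quaternions).

```python
# Linear crossfade between two poses over a blend window, a simple form of
# the animation synthesis used to smoothly connect pre-made clips at runtime.
def blend(pose_a, pose_b, w):
    """w = 0 -> pose_a, w = 1 -> pose_b (both poses share the same joints)."""
    return {j: (1 - w) * pose_a[j] + w * pose_b[j] for j in pose_a}

# hypothetical end/start poses of two clips (per-joint angles in degrees)
walk_end = {"knee": 40.0, "hip": 10.0}
run_start = {"knee": 60.0, "hip": 20.0}

for f in range(5):            # a 5-frame blend window
    w = f / 4                 # ramp the weight from 0 to 1
    pose = blend(walk_end, run_start, w)
```

At w = 0.5 the output sits exactly halfway between the two source poses, hiding the seam between clips.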

Regarding mocap, Jonathan Cooper gives a very comprehensive analysis in the eleventh chapter of his book. The following is the content I excerpted from the book.

GAME ANIM: VIDEO GAME ANIMATION EXPLAINED – by Jonathan Cooper

Chapter 11

Arguably, the single largest innovation in game animation in the last few decades has been the widespread adoption of motion-capture (mocap for short)—the process of capturing motion from live actors.

Much was said about mocap in the early days along the lines of “It’s cheating,” “It’s not real animation,” and “It’ll replace animators and we’ll lose our jobs,” but one must only look a decade earlier to see the same fears vocalized by animators regarding the shift from 2D traditional animation to 3D computer animation. The idea then that a computer character could have the same life as a series of artfully hand-drawn images was incomprehensible to many at the time, but later proved possible when handled by talented animators, and the same is true now of mocap.

The simple fact is that as video games have matured and their subject matter moves from simple cartoonlike characters and actions to more realistic renderings of human characters and worlds, the old approach of keyframing humans simply wasn’t cutting it visually, not to mention the sheer volume of animation required for a fluidly moving character with all its cycles, blends, and transitions would simply be impossible to create any other way.
That’s not to say that mocap can’t be a crutch, and when wielded incorrectly, the results are far from satisfying. There are some in production still who incorrectly believe that we shoot mocap then implement in the game and the job is done. A search for a “silver-bullet,” one-stop solution to capturing the subtleties of acting is ongoing, though this technology is merely yet another tool for animators to wield in their quest to bring characters to life. Mocap is but another method to get you where you want more quickly, and the real magic comes when a talented animator reworks and improves the movement afterward.

Do You Even Need Mocap?

For our hypothetical project, long before getting into the nitty-gritty of motion-capture production, a very important question to be asked should be whether to even use motion-capture or stick with a traditional keyframing approach. Here are some considerations to help answer this question.
1.What is the visual style of the game? A more realistic style benefits greatly from mocap, whereas mocap on highly stylized characters can look incorrect. Cartoony and exaggerated motion will help sell characters that are otherwise lacking in detail and visual fidelity, including those seen from afar. Mocap makes it easier to achieve realistic character motion.
2.Are our characters even humanoid? While some games have been known to mocap animals, the approach is typically used only for humans. If our main characters are nonhuman creatures or nonanthropomorphic objects, then mocap often isn’t even an option.
3.What kinds of motions will feature most in the game? If the characters are performing semirealistic motions such as running, jumping, climbing, and so on, then mocap will suit, whereas if every move is expected to be outlandish or something that no human could perform, then keyframing might suit better. The balance of these actions should determine the project’s adoption of mocap.
4.What is the scope of the game? Mocap gives the best value for the money when used on large projects with lots of motion, at which point the production cost of setting up a mocap shoot and the required pipeline is offset against the speed at which large quantities of character motion can be created. That said, cheaper yet lower-quality solutions are also becoming more readily accessible for smaller projects.
5.Would the budget even cover it? While affording an unparalleled level of realism, motion-capture shoots can be expensive. When compared to the cost of hiring additional animators to achieve the same quantity of motions via keyframe (depending on volume), the cost sometimes becomes more comparable, however.
6.What is the experience of the team? An animation team built over the years to create stylized cartoony games may take issue with having to relearn their craft, and attempts to adopt mocap may meet resistance. That said, motion capture does become a great way to maintain a consistent style and standard across animators.

How Mocap Works

While not absolutely necessary for great results, an understanding of the mocap process will only aid the game animator in finding ways to speed up the pipeline and get motion capture in the game faster and to a higher quality in as little time as possible.

Different Mocap Methods

Optical Marker–Based
While there are several alternative motion-capture methods, the traditional and most commonly used is via the triangulation of optical markers on a performer’s suit, captured by arrays of cameras arranged around a stage so that they create a “volume” within which the performance can be recorded. This provides the highest quality of motion-capture but is also the most expensive.

These arrays of cameras can number anywhere between 4 to upward of 36, and highly reflective markers are tracked at higher frame rates than required by the game project (typically 120 frames per second). As long as no fewer than three cameras can simultaneously follow a marker, the software simulation model will not lose or confuse the markers for one another. When this does happen, the cleanup team (usually provided by the stage) will manually sort them again.
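Because the stage captures at a higher rate than the game needs (typically 120 fps, as noted above), the delivered data is often decimated down to the game's frame rate. A minimal sketch, assuming a clean integer ratio between the two rates:

```python
# Decimate 120 fps capture data down to a 30 fps game rate by keeping
# every 4th sample (assumes src_fps is an integer multiple of dst_fps).
def downsample(samples, src_fps=120, dst_fps=30):
    step = src_fps // dst_fps
    return samples[::step]

marker_x = list(range(120))       # one second of hypothetical 120 fps marker data
game_rate = downsample(marker_x)  # 30 samples, one per game frame
```

Real pipelines may filter or resample rather than drop frames outright, but the rate mismatch they are solving is the same.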

Accelerometer Suits
The performer dons a suit with accelerometers attached, which, when combined with a simulated model of human physics and behavior, provide data without the need for a volume of cameras. However, unless the animation team is prepared to work longer with the data, the results are far from the professional quality provided by marker capture. Accelerometer mocap is therefore useful for lower-budget projects or previz before real captures for larger ones.

Depth Cameras

A third and experimental approach is to use depth-sensing cameras with no markers, applying body motion only to a physical model. This provides the cheapest option of all, and has created interesting results for art installations that deal with more abstract representations of the body. Depth cameras may provide some decent reference-gathering and previz but are ultimately less than ideal for an actual video game project due to the amount of work still required to make the results visually appealing post-shoot. That said, the quality of all options is increasing at an encouraging rate.

Performance Capture

Perhaps the biggest breakthrough in increasing acting quality in recent years has been the introduction of performance capture. While motion capture refers to the recording of only the body, performance capture records the body, face, and voice all at once by using head-mounted cameras (HMCs) and microphones. Doing so adds a level of continuity in subtle facial acting that was simply impossible in the previous method of recording everything separately and recombining in a DCC.

While this method has become ubiquitous with cinematic cutscene shoots and head-cams are becoming more affordable, care must be taken to ensure all three tracks (body, face, and audio) remain in sync. As such, the mocap stage will generally provide time codes for each take, which must be maintained during the assembly and editing phase.

While it used to be a requirement for real and virtual actors’ faces to match as much as possible in order to best retarget the motion to the facial muscle structure, current methods employ retargeting to digital doubles first, then translate to the desired face of your chosen video game protagonist, though the extra step naturally makes this process more costly at the benefit of freeing you up to cast the best actors regardless of how they look.
Due to the extra overhead of camera setup and calibration, cinematic shoots typically go much slower than the often rapid-fire process of in-game shoots, not to mention the reliance on scripts and rehearsals and multiple takes to get the action just right. While rehearsals can certainly benefit in-game shoots (most notably when requiring choreography such as during combat), they are less of an absolute necessity as is the case with cinematic cutscenes. The secret to successful cinematics isn’t a technology question, but ensuring the performance (and writing) are as good as possible. There’s only so much an animator can polish a flatly delivered line of awkward dialogue from a wrongly cast actor.
For the remainder of this chapter, we’ll be focusing on the first approach, (optical marker-based), as this is the most commonly used method at video game studios and therefore the most likely one an animator will consistently encounter in his or her career.

The Typical Mocap Pipeline

The typical workflow for optical marker–based mocap involves:

1.The actor arrives and, once suited up, is calibrated into the system matching his or her height and size (and therefore unique marker positions) with a character in the capture software.
2.The actor is directed on a stage to capture the desired motion. Either then, or later via a viewing software, the director decides upon which takes (and sometimes frame ranges) he or she wishes to purchase from the stage. See the “Directing Actors” section later in this chapter for some best practices when directing actors.
3.The stage crew then clean up the motion by fixing lost markers and smoothing out extraneous jerky motion due to marker loss or interference. This process can take anywhere from a few hours to a few weeks. The cleaned-up motion is delivered to the game studio as “takes.”
4.A technical animator at the game studio retargets the delivered data onto their in-game character, checking the quality of the delivered mocap and requesting redeliveries if the quality is off.
5.The animators then begin working on the mocap that is now driving their game characters. This usually consists of a mocap rig and control rig both driving the export skeleton, allowing the animator to trace motion back and forth and work in a nondestructive manner, adding exaggeration and appeal without destroying the underlying motion that has been bought and paid for. For details on this stage, the most involving step for the animator, see the section “Working with Mocap” at the end of this chapter.
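The five steps above can be sketched as a linear data pipeline. Every function here is an illustrative stub with invented names; the point is only the order in which data flows from stage to animator.

```python
# The typical optical mocap pipeline as a chain of stubs (names invented):
# calibrate -> capture -> cleanup -> retarget -> polish.
def calibrate(actor):
    return {"actor": actor, "calibrated": True}

def capture(session, shot):
    return {**session, "take": shot, "raw": True}

def cleanup(take):                      # stage crew fixes lost markers
    return {**take, "raw": False, "cleaned": True}

def retarget(take, character):          # technical animator maps to the rig
    return {**take, "character": character}

def polish(take):                       # animators add exaggeration and appeal
    return {**take, "polished": True}

take = polish(retarget(cleanup(capture(calibrate("actor_A"), "jump_01")), "hero"))
```

Each stage only adds to the data it receives, mirroring how the delivered take is progressively refined rather than re-shot.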

Mocap Retargeting

Because actors rarely match the dimensions of the video game characters they are portraying, the studio must incorporate retargeting of the motion from the delivered actor-sized motion to the game character. There are a variety of settings to get the best possible translation between characters without issues that are difficult to fix at a later stage.
This process is generally performed in MotionBuilder, and the single biggest potential issue to be wary of is the use of “reach” to match limbs’ captured positions regardless of the difference between the source actor and game character.
Used generally for the feet to ensure they match the ground and prevent foot-sliding, reach can also be used for hands when it is essential they match the source position, such as when interacting with the environment like grabbing onto a ladder. However, leaving reach on for arms in general can be disastrous, as the hands will always match, causing the arms to bend or hyperextend unnaturally to maintain the source position.
At this stage, the person tasked with retargeting should also keep an eye out for individual bad retargets or jerky motion where lost mocap markers weren’t correctly cleaned up, systemic issues that plague every delivered motion such as bent clavicles or spines, or loss of fine detail due to smoothing applied by default to all motions by the mocap studio.
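The reach policy described above (always on for feet, off for arms unless the hands must match the source) can be captured as a small configuration sketch. The key names are hypothetical and are not MotionBuilder's actual property names.

```python
# Per-limb "reach" weights for retargeting: 1.0 pins the end effector to the
# captured position, 0.0 lets the retargeted skeleton own it. Keys invented.
DEFAULT_REACH = {"foot_L": 1.0, "foot_R": 1.0,   # prevent foot-sliding
                 "hand_L": 0.0, "hand_R": 0.0}   # avoid hyperextended arms

def reach_settings(hands_must_match=False):
    """Enable hand reach only for shots like ladder grabs."""
    settings = dict(DEFAULT_REACH)
    if hands_must_match:
        settings["hand_L"] = settings["hand_R"] = 1.0
    return settings
```

The default keeps arms free so differing proportions never force the elbows to bend unnaturally, while a ladder-grab shot opts in to pinned hands.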

Mocap Shoot Planning

The absolute worst thing an animator can do is arrive at the stage on the day unprepared. Here are several essential practices that will ensure as smooth and productive a shoot as possible.

Shot-List

A shot list is invaluable both on the day of shooting and in the run-up to the shoot, as it’s the single best way to evaluate everything you’ll need to capture, so it helps plan out the shoot. While the mocap stage will often provide their own formatting as they require a copy before the shoot day for their own preparation purposes, you can make a start yourself within Excel or Google Docs. Any shot list should contain these following columns:

•Number: Helps to count the number of shots and therefore make a time estimate.
•Name: Shots should be named as per your file naming convention.
•Description: A brief explanation of the desired action lest you forget.
•Props: Which props will be required by the actors—even if not captured.
•Character: Essential for multicharacter shots for retargeting purposes.
•Set builds: Similar to props but rather walls, doors, and so on actors will be interacting with.
•Notes: Added on the day as required, describing directing notes such as preferred takes.
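A shot list with exactly these columns can be started in a few lines before moving to the stage's own template. The rows below are invented placeholders, just to show the structure.

```python
import csv
import io

# Minimal shot list using the columns described above, written as CSV
# (row contents are invented, for illustration only).
COLUMNS = ["Number", "Name", "Description", "Props", "Character", "Set builds", "Notes"]
rows = [
    [1, "hero_jump_01", "standing jump", "", "hero", "", ""],
    [2, "hero_climb_01", "ladder climb", "ladder", "hero", "ladder wall", ""],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(COLUMNS)
writer.writerows(rows)
shot_list_csv = buf.getvalue()   # ready to save or import into a spreadsheet
```

The same structure opens cleanly in Excel or Google Docs, which is where the list usually lives in the run-up to the shoot.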

Ordering/Grouping Your Shots

There are a variety of factors that will determine the ordering of mocap shots, not least being the priority in which they’re needed for the game schedule to ensure work isn’t delayed should you fail to capture everything, which happens often. In addition, grouping multiple actions requiring the same set builds and props (especially if the props are captured) will ensure a good flow onstage.
Perhaps the largest time sink on any shoot day is the building of sets, so great consideration must be taken to try to capture everything required on one set build before it’s dismantled. It is wise to avoid capturing high-energy actions at the end of the day (or even right after lunch), as the actors will naturally be more tired then. Conversely, start the day with something fun like fast and easy rapid-fire actions that will build momentum and set you up for a great day of shooting.
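Grouping shots by set build, as recommended above, is a simple sort-and-group over the shot list. Shot names and set names here are invented.

```python
from itertools import groupby

# Order shots so everything on one set build is captured together before
# the set is dismantled (shot data invented, for illustration only).
shots = [
    {"name": "climb_01", "set": "ladder wall"},
    {"name": "door_01", "set": "doorway"},
    {"name": "climb_02", "set": "ladder wall"},
]

shots.sort(key=lambda s: s["set"])   # groupby needs sorted input
schedule = {build: [s["name"] for s in group]
            for build, group in groupby(shots, key=lambda s: s["set"])}
```

Each key of `schedule` is one set build with all of its shots batched together, so the set is only built and struck once.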

Rehearsals

The most proven way of having an efficient mocap shoot while obtaining the highest quality of acting is rehearsing beforehand. Not only is it a great way to build a relationship with the actors, it also allows them to go deeper into any performance. While not needed for many in-game actions that can be quickly done on the day, rehearsing is essential for cinematic or story shoots that require deeper characterization and will likely have closer cameras that can highlight detail, especially for full facial performance capture. Something as subtle as a thoughtful pause or a tilt of the head can make the world of difference to a performance when an actor is fully invested. Giving them time to know and understand the characters they’re portraying is impossible without rehearsing.

Mocap Previz

Another excellent way to avoid costly wasted time on set (or afterward when you find the actions just won’t fit the scene) is to previsualize the actions before even going to the shoot. This way, not only can progress be made on the various gameplay and story scenarios without having to wait for shoot day to roll around, you’ll already have figured out many of the technical issues from within simple cost-effective scenes, such as how much of the set you’ll actually need to build based on touch points.Importantly, understand that the actors will likely improvise and suggest much better actions than your previz on the day of shooting, so it should be used as a guide and initial inspiration only, giving the director a better evaluation of what will and will not cause technical issues when new suggestions arise.

Working with Actors

While it can be physically exhausting to shoot mocap, sometimes over a series of days, working directly with actors can be one of the most rewarding aspects of video game animation as you work together to bring your characters and scenes to life. The improvisational collaboration, sometimes to an intimate level as you work together to create the perfect performance, can be a refreshing break from working at a computer screen.

Casting

Props & Sets

Props are an essential part of any game mocap shoot, usually taking the form of weapons or other gameplay-related items. Importantly, props give actors some tangible “business” (something to play with in their hands or touch in the environment) to work with in cinematic story scenes and make the scene more natural.


Posted in Thesis

Thesis research week 1

Outline

Analysis of 3D game animation (tentative)

Abstract

Key words: game animation, mocap, game scene, game soundtrack (tentative)

1. What is game animation?

1.1 Definition of game animation:
The most basic production method is manual keyframe animation.
Another kind, procedural animation, is more flexible and changeable. This kind of animation binds the action and the program together and controls the actions of various parts of the character according to certain rules. Because the animation is calculated by the program, as long as you adjust the variables in the program, the animation will also change accordingly. In this way, the animation can be adjusted in real time according to the environment.
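The idea of procedural animation being driven by adjustable variables can be shown in a few lines: the angle is computed from parameters each frame rather than read from stored keyframes, so changing the parameters changes the motion in real time. The function and values below are hypothetical.

```python
import math

# Procedural sway: the bone angle is a function of time and tweakable
# variables, not stored keyframes (parameter values are invented).
def sway_angle(t, amplitude=15.0, speed=2.0):
    """Angle in degrees at time t (seconds); change the variables to change the motion."""
    return amplitude * math.sin(speed * t)

frame_angles = [sway_angle(f / 30) for f in range(30)]  # one second at 30 fps
```

Doubling `amplitude` or `speed` immediately changes every frame of the motion, which is exactly the flexibility that pre-baked clips lack.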

1.2 Different areas of game animation (the characteristics and types of game animation)

1.2.1 Player character animation

1.2.2 Facial animation

1.2.3 Cinematics and cutscenes

1.2.4 Technical animation

1.2.5 Nonplayer character

1.2.6 Cameras

1.2.7 Environment and prop animation

1.3 The main types of game character animation design styles
1.3.1 The basic point of designing character actions
1.3.2 Determine the action style according to the performance style and rhythm of the overall game
1.3.3 Symbolic representation of action language

1.4 The difference between game animation and film animation

1.4.1 Compared with the performance of traditional action movies
1.4.2 Compared with the action performance of traditional animated films

1.5 Advantages and disadvantages of action design in game animation

1.6 Combination and similarities and differences between game action design and animation motion law
1.6.1 Create a sense of virtuality based on the principles of animation (reference)
What is a sense of virtuality?
1.6.2 7 principles for creating a sense of virtuality

1.7 Application of CG technology in game animation design

1.8 The difference between 2D game animation and 3D game animation production

2. The styles of game animation

2.1 Realistic game animation
Pursues visual realism, immersion, the “interactive movie”

2.2 Cartoon (two-dimensional) game animation – take Fortiche studio as an example
Rendering method (cel-shading)

3. Motion capture in game animation

3.1 Do you even need mocap?

3.2 How mocap works

3.3 The typical mocap pipeline

3.4 Mocap retargeting

3.5 Will motion capture replace key frame animation?

https://www.youtube.com/watch?v=t3zpWlc-tMs&t=408s

4. Scene analysis in game animation

5. The characteristics and influence of the music score in video game animation – take Zelda’s Breath of the Wild and Final Fantasy 14 as examples

6. The workflow of the game action designer

7. Summary

Reference:

Character animation design based on motion capture data
The application of skeletal animation in games is becoming more and more common, and its standard mode is to use forward kinematics.
Generally speaking, artists use animation software to make the series of animations they need in advance; when the game is running, the program decides which animation to display based on logic and calculates the specific display based on time. This way of making animation clips in advance brings a big problem: all animation clips are immutable. To make the game as realistic as possible, the animations a game may need are rich and varied, and making all of them may be unacceptable in terms of cost and time. To overcome this problem, the game industry introduced motion capture. Motion capture technology was first applied in the film industry, but perhaps no other industry applies it more commonly than games. Motion capture does not solve every problem: we cannot consider and make all possible animations in advance, not only for cost reasons but also for technical ones. Although skeletal animation requires less memory than vertex animation, too much game character animation will still exhaust memory resources, especially on video game consoles, which have very limited memory compared with personal computers. In this case, one solution is animation synthesis. Character animation is a combination of multiple actions, and animation synthesis technology has been widely used in the game field, not only to synthesize new animation clips but also to smoothly connect two different clips, bringing convenience to animation production. In the future of animation creation, animation synthesis technology will be more widely used. [3]

Principles of Virtual Sensation

by sswink

What is the “feel” of a game? Every gamer knows it and can easily recall the sensation, the kinesthetic feeling, of controlling some virtual avatar or agent. It’s what causes you to lean left and right as you play, swinging your controller wildly as you try to get Mario to move just a little faster. It’s the feeling of masterfully controlling some object outside your body, making it an extension of your will and instinct. This “virtual sensation” is in many ways the essence of videogames, one of the most compelling, captivating, and interesting emergent properties of human-computer interaction.

The sensation of control is a complex and multifaceted phenomenon: so many things have to happen, both in the computer and in the player’s mind, for this powerful, compelling feeling to occur. How, then, can it be mastered and used as a tool to create better games?

This article explores the underlying principles that govern the “feel” of controlling something in a game. Just as the “Principles of Animation” guide good animation, these principles of virtual sensation are intended to guide good-feeling gameplay. For reference, the Principles of Animation are:

1.Squash and Stretch – Defining the rigidity & mass of an object by distorting its shape during an action.

2.Timing – Spacing actions to define the weight & size of objects & the personality of characters.

3.Anticipation – The preparation for an action.

4.Staging – Presenting an idea so that it is unmistakably clear.

5.Follow Through & Overlapping Action – The termination of an action & establishing its relationship to the next action.

6.Straight Ahead Action & Pose-To-Pose Action – The two contrasting approaches to the creation of movement.

7.Slow In and Out – The spacing of in-between frames to achieve subtlety of timing & movements.

8.Arcs – The visual path of action for natural movement.

9.Exaggeration – Accentuating the essence of an idea via the design & the action.

10.Secondary Action – The Action of an object resulting from another action.

11.Appeal – Creating a design or an action that the audience enjoys watching.

The Principles of Animation are fascinating because they are non-negotiable aesthetic standards. An animation that adheres to the Principles of Animation will be better than one that doesn’t. This is interesting because the subjective nature of aesthetics would seem to make a universal aesthetic standard impossible. There’s no accounting for taste, the saying goes. The Principles of Animation are applicable to all animation, though, because they pertain to aesthetic properties that are processed subconsciously. You can’t tell why you like an animation that has squash and stretch and why you dislike one that doesn’t. You just feel it.

The medium of videogames has a similar set of governing aesthetic principles. Identifying these underlying principles adds another tool to the designer’s toolbox, allowing a clearer understanding of virtual sensation and how to create it. This will save time in game production, reducing the iterative workload and allowing designers to focus on solving the interesting problems unique to their specific game. If we’re not wasting valuable production time reinventing the wheel, we have more time to create something unique and beautiful.

What is Virtual Sensation?

Driving a car, you have a very strong sense of the position of that car, the feel of steering and controlling it, of mastery. This is the ability that every person who’s ever learned to drive a car has: the ability to extend precise control over something outside your body. There is a great amount of pleasure in the learning and eventual mastery of such a motion translation. Very abstractly, when you remap your neural pathways, you are feeding your brain, and it rewards you with pleasure. So much so that people often seek out new and increasingly complicated mappings to master: sports, rock climbing, juggling, off-road mountain unicycling, and so on. Many people also find this pleasure in video games, where it is both distilled to its essence and free of the constraints and dangers of more physical activities. You can change the turning radius of a car, but you can’t change gravity. This experience of control is derived from an artificial kinesthesia. This is the “feel” of the game, the thing that makes your mom lean left and right in her seat as she tries to play Rad Racer. While accessories have evolved to enhance and support this virtual sensation – controller shake, for example – its essence has been the same since the creation of Spacewar and oscilloscope table tennis.

When describing the control of a game, players often use a physical analogy; the control is “floaty,” “twitchy,” “smooth,” “slow,” or “loose.” Accompanying these descriptors are very powerful “gut” reactions. Terms like love and hate are often invoked, with superlative emphasis. Best game ever, worst controls ever, worst camera ever. These are plainly aesthetic judgments, judgments that indicate some kind of inviolable rules are in play.

To define those rules, we need to delineate traditional cartoon animation, from Bugs Bunny “shorts” to full-length films like Snow White, from virtual sensation in video games. By definition, traditional animation plays beginning to end, linearly, as a series of images. By contrast, virtual sensation in games is driven primarily by the player’s input. If there’s no input, there’s no movement.

This delineation, between animation and virtual sensation, is a significant red herring in video games because there is a large amount of crossover from traditional animation. Many games layer linearly animated objects and characters on top of their reactive components.

For example, in the game Street Fighter 2 there are large, detailed sprites with many different animations, the playback of which is triggered by specific button presses or player input sequences. Underlying this system of “pose boxing,” though, is a very basic virtual sensation. The movement of the joystick maps directly to the character’s movement on screen, and that movement is extremely simple in nature. Imagine Street Fighter with simple grey boxes instead of detailed character animations (figure 1) to get a good sense of what is purely reactive and what is baked-on animation.

While the shape of the character changes depending on moves triggered by the player, the underlying motion is very simple.

The other thing to keep in mind is that while virtual sensation can provide a great foundation for a good game, it is separate from the concerns of other types of game design. Virtual sensation is not concerned, for example, with the tweaking of abstracted variables to achieve “balance” in a game. Virtual sensation occurs primarily at the lowest level of interaction, what you experience from moment to moment, representing a gut feel rather than a conscious experience. This is, perhaps, why it is extremely difficult for players to articulate why they like or dislike the feel of a game.

Seven Principles of Virtual Sensation

The seven principles of virtual sensation defined here will hopefully enable game designers – or indeed, anyone concerned with human-computer interaction – to improve the “gut feel” of the interface. They are a conscious attempt to improve the users’ unconscious experience.

1. Predictable Results – Allowing a sense of mastery and control by correctly interpreting player input and providing consistent, predictable results.

2. Novelty – Small, subtle differences in reaction each time a specific input is triggered make each interaction feel fresh and interesting.

3. Traction – Enabling mastery, control, and learning by rewarding player experimentation.

4. Low Skill Floor, High Skill Ceiling – Making the mechanic intuitive but deep; it takes minutes to pick up and understand but a lifetime to master.

5. Context – Giving a mechanic meaning by providing the rules and spatial context in which it operates.

6. Impact and Satisfying Resolution – Defining the weight and size of objects through their interaction with each other and the environment.

7. Appealing Reaction – Producing an appealing reaction regardless of context or input.

The application of these principles should transcend different “genres” and types of games, applying to 2d and 3d games alike. Anywhere there is virtual sensation, these principles should help improve it.

The practical implications of these seven aesthetic principles are detailed below.

Note: for many of the points made here, there are interactive examples provided. These examples will be crucial to understanding each principle; I encourage playing them as they are referenced in the text. There are hyperlinked images throughout the paper that will spawn in-browser popup versions of the game. Alternatively, go here for a master page with links to all the tests. The web versions of these tests all require the Virtools web player plugin to run. The web player should ask for permission to install if it isn’t already on your system (the installer is 740k). If you need more information on the Virtools web player – including compatibility and install issues – visit the Virtools Web Player download page. A downloadable .exe version of the tests can be found here.

1. Predictable Results – Allowing a sense of mastery and control by correctly interpreting player input and providing consistent, predictable results.

This is the cornerstone of virtual sensation. If rotating your car’s steering wheel clockwise switched from steering left to steering right at random, you would not be able to control it. Without the ability to predict the result of your input, there can be no feeling of control or mastery, no virtual sensation. Though this is very intuitive and easy to understand as a concept, many designers hamstring their virtual sensations by mapping inputs to results that are too difficult to process, creating mappings that are unnatural or counterintuitive, or by overwhelming the player with states and possibilities and thus making even consistent results seem random.

One pitfall in creating virtual sensation is relying on the infallibility of the platform on which it resides. It is easy to assume that because the game is technically infallible – it receives input accurately and processes it the same way each time – the input it receives and the results it responds with correspond in a meaningful way to what the player intended. As Will Wright observes, game design is half technology and half psychology. Even if the result of a given input is internally consistent as far as the game is concerned, if the movement is quick, snappy, or otherwise difficult for the player to perceive, it becomes unpredictable and uncontrollable. In the Cube Movement 1 test, touching the red dot using the normal controls is easy but doing so with the speedy controls is much more difficult and somewhat disorienting. The random controls variant makes it impossible to accurately predict the results of a given input and is therefore the most difficult and frustrating.

Cube Movement 1

Another design consideration that affects predictability is mapping. Mapping refers to the relationship between controls, their movements, and the results in the game. Based on the input device and the game’s presentation, you have expectations about what will happen in response to a given action. A natural mapping exploits these expectations to create immediate understanding for the player. For example, using the normal controls in Cube Movement 1, each of the four buttons is a spatial analogy: pressing the topmost button moves the cube up, the left button moves it left, and so on. Another way to achieve natural mapping is through cultural standards, which in games is often referred to as a “genre convention.” Using the keyboard keys W, A, S, and D to correspond to moving an object up, left, down, and right, respectively, is a well-established cultural standard in videogames.
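To make the idea concrete, a natural spatial mapping can be sketched as a direct lookup from key to direction. This is an illustrative fragment, not code from the Cube Movement tests; the key names and speed value are my own:

```python
# Sketch of a natural spatial mapping: each key maps directly to the
# screen-space direction it suggests (W up, A left, S down, D right).
KEY_TO_DIRECTION = {
    "w": (0, 1),   # up
    "a": (-1, 0),  # left
    "s": (0, -1),  # down
    "d": (1, 0),   # right
}

def move(position, keys_held, speed=1.0):
    """Apply every held key's direction to the position."""
    x, y = position
    for key in keys_held:
        dx, dy = KEY_TO_DIRECTION.get(key, (0, 0))
        x += dx * speed
        y += dy * speed
    return (x, y)
```

Because the table mirrors the spatial layout of the keys themselves, the player never has to consciously translate input into result.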

Unless you’re intentionally creating a counterintuitive feel to pursue some experiential goal, such as the reverse-controlled beetle golf minigame in WarioWare, Inc.: Mega Microgames or the clumsy, oppressive feel of Resident Evil 4, you should use spatial analogies and cultural standards wherever possible to create mappings that are easily learned and remembered.

Finally, avoid overwhelming the player with states. The “state” of a virtual sensation refers to a change in mapping that happens during gameplay. A simple example of this is jumping in Super Mario Brothers. When Mario is touching the ground, he can move left and right fairly quickly. In the air, his movement becomes much less responsive. The meaning of pressing left or right on the directional pad changes until he’s back on the ground again. State shifts are desirable because they tend to give rise to expressivity and improvisation and increase reaction sensitivity (described in principle four, “Low Skill Floor, High Skill Ceiling”). As long as each state is easily discernible and switching between them is obvious, predictability is maintained. The downside is the possibility of overwhelming the player with too many states. This causes confusion, especially when switching between states is not obvious enough. Pressing a certain button no longer yields the same result, so the result is no longer predictable and the feeling of control is lost.

For example, inexperienced players trying to learn how to play Tony Hawk’s Underground become overwhelmed very quickly by the sheer number of possible states in the game. Especially if they’ve never played a Tony Hawk game before, players will fiddle around with the controls for less than a minute, quickly put the controller down, and say something like “I don’t like skateboarding games.” The large number of states in the game – grinding, manualing, airborne, running out, skitching, lip tricking, and so on – makes them feel like their inputs are random and unpredictable. In addition, the state switches are difficult to perceive and the skater moves at an extremely high speed, further alienating potential players by moving far more quickly than they can process. Unable to find any traction in the first few minutes, they give up.

Another problem with states is ambiguity. If there is no clear mechanism for showing the current state of the system, certain input patterns can yield a seemingly random result. If a player mashes on the buttons or accidentally presses a second button with their thumb, the result is effectively random to them. Again, the game will see those inputs in terms of milliseconds, knowing which one came first. To the player, however, the result seems inconsistent. To quote Mick West:

In [Super Mario 64] pressing [the A button] to jump then R1…triggers a ground pound. Pressing R1 before A triggers a backflip. Pressing them both at the same time causes either a ground pound, a backflip, or a normal jump, seemingly at random – the player has no control. The player can press these two buttons simultaneously over and over, and never figure out how to control each of these three actions properly.
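One hedge against this kind of ambiguity is to resolve chords deterministically rather than by millisecond ordering. The sketch below is hypothetical – the action names echo the example above, but the priority scheme and table are mine, not from any shipped game:

```python
# Hypothetical sketch: resolving an ambiguous button chord deterministically.
# Instead of letting millisecond ordering pick the move, a fixed priority
# guarantees that the same simultaneous input always yields the same action.
PRIORITY = ["ground_pound", "backflip", "jump"]  # assumed ordering

# Which actions each chord could mean; the ambiguous chord lists all three.
CHORD_ACTIONS = {
    frozenset(["A"]): ["jump"],
    frozenset(["A", "R1"]): ["ground_pound", "backflip", "jump"],
}

def resolve(buttons):
    """Pick one action for a set of simultaneously pressed buttons."""
    candidates = CHORD_ACTIONS.get(frozenset(buttons), [])
    for action in PRIORITY:
        if action in candidates:
            return action
    return None  # no mapped action in this sketch
```

With a rule like this, mashing both buttons over and over always produces the same result, so the player can actually learn the chord.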

In the Cube Movement 1 test, try pressing the 1, 2, and 3 buttons at the same time. You’ll get what is essentially a random result for what appears to be the same input.

Cube Movement 1

Predictability also means inference – from the first few minutes of a game, the player can extrapolate a strikingly clear picture of the structure of the entire game. This is a good thing: it gives the player traction in what is at first the alien and disorienting process of learning a new mechanic. In Super Mario Brothers, I know that if I fall into a hole, I will lose a life. It only takes one hole to figure that out; I’ll avoid holes for the rest of the game. This points to an important distinction: just because something is reproducible doesn’t mean it’s predictable. Just because doing action A will always produce result B doesn’t mean you can infer that if you want result D, you should do action C. A predictable result should reveal as much about the possibilities you haven’t tried as about the ones you have.

As game designers, we need to remember that we have very little time to hook the player. If they don’t feel successful and oriented within the first couple minutes, we’ve lost them. The lowest order feedback loop, the first thing they’ll encounter, is the virtual sensation, the moment-to-moment control. If it doesn’t feel good at an intuitive level, giving them predictable results they can sink their teeth into, they’ll stop playing. In this way, virtual sensation is the gatekeeper to all other game experiences.

2. Novelty – There are an infinite number of results from the same input.

While a virtual sensation must have a foundation of predictable results, it should also have novelty. That is, the same input (or what the player perceives to be the same input) should yield slightly different results to keep the player engaged, to avoid the fatigue of repetition, and to increase the overall appeal of the virtual sensation. While predictability and novelty would seem to be at odds, the two can coexist quite happily. When they do, you have the makings of a great virtual sensation.

One enemy of novelty is linear animation. Even in a game like Jak and Daxter: The Precursor Legacy, where the linear animation is of uncommonly high quality and there are dozens of hand-animated variants for the animations, it’s very easy to tell that Jak is doing the same punch every time. The problem is that, once exhausted, even quality content gets boring. Watching Jak punch for the ten thousandth time is significantly less compelling than it was the first time. For a virtual sensation to hold the player’s interest, it needs to feel novel and interesting even after hours of play. Even repetitive actions should feel fresh each time you trigger them.

Many games attempt to solve this problem with mountains of additional content, running the player through a series of increasingly challenging and varied levels that give new and interesting context to the virtual sensation to keep it from feeling stale. Another approach is to introduce more mechanics – additions and modifications to virtual sensation – over the course of the game. For example, Castlevania: Dawn of Sorrow does a great job of constantly adding new virtual sensations through different “souls” and weapons, each of which adds a different feel to the underlying movement or augments it with new states (such as the ability to jump twice without landing).

Yet another approach is to use a deterministic global physics system, which keeps a virtual sensation feeling fresh by being accurate past what the player can perceive: the player will never be able to offer the same input twice. With deterministic physics, the same input will technically yield the same result. Games like Bridge Builder and Ski Stunt Simulator can accurately record a player’s input to the millisecond, and feeding this input back into the system will always yield the same result. The feeling of novelty in these games exists, then, in the player, who can’t completely and accurately process the intricate subtleties of the simulation. While the player may be able to consistently achieve the same result in Ski Stunt Simulator – jumping a ravine then doing a backflip over a wooden hut, for example – no two runs will ever be the same. This is because while the parameters that govern the simulation will react identically each time, the player can’t perceive some of the most subtle differences. It’s more sensitive than the player’s perception, much like the real world. This is one of many ways perception affects virtual sensation. Because our perception is keenly tuned to physical reality, we subconsciously expect certain things to happen when objects interact and move.
One thing we expect is that no motion will ever be exactly the same twice. This is the nature of reality: messy and imprecise. No one can punch exactly the same way twice, or throw a discus or javelin the same way twice. So if we see the same action happening in the same way over and over again without some subtle variation, it looks wrong.
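The replay property described above can be sketched in a few lines. Assuming a pure, fixed-timestep simulation (the step function and values below are invented, not taken from Bridge Builder or Ski Stunt Simulator), feeding recorded input back in reproduces the run exactly:

```python
# Sketch of deterministic recording and replay. Because the simulation
# step is a pure function of state and input, replaying the recorded
# inputs always yields an identical result.
def step(state, input_force, dt=1.0 / 60):
    position, velocity = state
    velocity += input_force * dt
    position += velocity * dt
    return (position, velocity)

def run(inputs, state=(0.0, 0.0)):
    """Run the whole recorded input stream through the simulation."""
    for force in inputs:
        state = step(state, force)
    return state

recorded = [1.0, 0.5, -0.25, 0.0, 2.0]  # invented per-frame inputs
assert run(recorded) == run(recorded)   # identical on every replay
```

The novelty lives entirely in the player’s imprecision: no human can reproduce `recorded` exactly by hand, even though the machine can.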

Finally, a great way to keep a virtual sensation feeling novel and interesting is allowing improvisation and expression (covered in greater detail in the final section). If the player feels they have enough different states available and if those states overlap in many different and interesting ways, playing the game can evolve to become a form of self-expression.

3. Traction – Enabling mastery, control, and learning by rewarding player experimentation.

In many games, most of your time is spent failing. Especially in the first few minutes of play, a game is pure experimentation. The player is flailing around trying to find some success amongst the inevitable difficulty of learning a new mechanic, a new motion translation. These few minutes are all we can ask of players, who by sitting down to play the game are giving the designer the benefit of the doubt. It is crucial that we give them the tools they need to feel immediate success, to gain traction.

Traction is the moment of dawning comprehension just after the player feels their first success in the game. In that first moment of feeling oriented and safe, the structure of the game unfolds for them and they understand the challenges of the game. If they think their skill is a reasonable match for that challenge and that it seems interesting, they continue to play the game. If they never gain traction, they put the game down very quickly. Traction is all about giving the player good feedback; good feedback is immediate, clear, and useful.

Without immediate feedback, there can be no virtual sensation. If rotating the steering wheel of your car only turned it thirty seconds from now, there would be no control or mastery. It is in the immediacy of result that the feeling of control and the ability to translate motion lie. In a game it is the same immediacy, enabled by the real-time processing power of the computer, which creates that virtual sensation. There is no virtual sensation in a turn-based game. For example, using the delayed controls in Cube Movement 1, it becomes clear that there is no virtual sensation if the feedback is not immediate.

Cube Movement 1

Again in the Cube Movement 1 test, press the 1, 2, and 3 buttons simultaneously. Note once again that the result of your action is random. Now press the Enter button. You should now see an arrow corresponding to the current state of the system. Pressing buttons 1, 2, and 3 simultaneously still yields a random result, but now the game is giving you clear feedback on the outcome of your action even if it’s not correctly interpreting your intent. A better approach would perhaps be to have all mashed (ambiguous) inputs default to the normal control setting, which is the easiest to control. The most important thing, though, is keeping the feedback clear: as long as you know what state you’re currently in, even the negative impact of an ambiguous result can be removed.

Finally, for feedback to be useful, it needs to accurately communicate the game state to the player. If you screwed up, did you understand why? Did the failure state give you some insight that will aid you to improve moving forward?

In Mario Kart DS, to succeed you must make use of “red sparks.” Red sparks occur after entering the “power slide” state, a state which alters the mapping of the steering input, allowing the player to reorient their kart without directly turning it. After entering a power slide, the responsiveness of the steering is greatly dampened, while overall turning radius is sharpened. This means that in order to negotiate a very sharp turn, one must go into a power slide, making the ability to effectively use the power slide the most important skill in the game. If you successfully enter the power slide state, smoke particles and a screeching tire sound effect are triggered, informing you of a state change. In addition, because of the reduced responsiveness in the steering controls, it is possible to very quickly tap the left and right buttons without significantly altering your trajectory. If you press the left and right buttons in rapid succession, a bright, blue, obvious “spark” particle appears, accompanied by a satisfying and remarkable noise: clear, immediate, and useful indications of another important state change. Repeat the quick left-right sequence, and the sparks turn to red. At this point, exiting the power slide gives a speed boost, accompanied by a totally different particle effect and sound.

If you don’t power slide into a sharp turn, you won’t complete the turn properly. If you don’t see smoke coming from the tires and hear the screeching noise, you know you haven’t entered the slide state properly. If you press left and right quickly while in the slide state and you don’t see sparks, you know you haven’t done it properly. If you release the power slide while the sparks are blue, you aren’t surprised if you don’t get the speed boost.

[Insert Mario Kart DS Video]
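The power-slide progression reads naturally as a small state machine in which every transition fires an obvious feedback cue. This sketch is my own reconstruction from the description above, not Nintendo’s implementation; the state and cue names are illustrative:

```python
# Hypothetical sketch of the power-slide state progression.
# Each transition fires a clear, immediate feedback cue.
class PowerSlide:
    def __init__(self):
        self.state = "driving"
        self.taps = 0        # completed left-right tap sequences
        self.cues = []       # feedback shown to the player

    def start_slide(self):
        self.state = "sliding"
        self.cues.append("smoke + tire screech")

    def tap_left_right(self):
        if self.state != "sliding":
            return
        self.taps += 1
        if self.taps == 1:
            self.cues.append("blue sparks")
        else:
            self.cues.append("red sparks")

    def release(self):
        """Exit the slide; return True if the boost is earned (red sparks)."""
        boosted = self.state == "sliding" and self.taps >= 2
        if boosted:
            self.cues.append("boost particles + sound")
        self.state = "driving"
        self.taps = 0
        return boosted
```

The point of the sketch is that every branch the player can take produces a distinct cue, so failure is never silent or ambiguous.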

Mario Kart is an excellent example of accurately conveying many nuanced, subtle state changes to the player. Many games do not, and it greatly affects the player’s perception of how the game feels. Interestingly, even a virtual sensation that does a very bad job of communicating the game state to the player can be mastered. With enough determination, it’s possible to learn just about any virtual sensation, and many players pride themselves on being able to overcome such barriers. Perhaps the game has great context, or the multiplayer component is particularly compelling. Whatever the reason for wanting to learn a virtual sensation, whatever great experience can be had after mastery, there’s no excuse for erecting barriers against players by denying them important feedback about the results of their actions.

If you’re always giving the player good feedback, you tend to end up with consistent, measurable progress. One of the most appealing things about games is the sense of measurable progress, which is often formalized into points or level ups or some other numeric metric of skill progression. This provides a welcome change from everyday life, where there are very few formal metrics of progress.

4. Low Skill Floor, High Skill Ceiling – Making the mechanic intuitive but deep; it takes minutes to pick up and understand but a lifetime to master.

The best games are simple, can be learned in minutes, and take a lifetime to master. Being easy to learn, a game captures players of all skill levels. Being difficult to master, it can keep them playing their entire lives, driven always to improve. In games where it’s featured prominently, virtual sensation is a kind of microcosm for this phenomenon. It’s the first thing that players will have to master, and the basis for all their interactions from then on. The overall experience of playing the game is an amplification of the success of the virtual sensation on which it sits. Goals, levels, scenarios, items, weapons, and other changing contexts may highlight various aspects of the virtual sensation, but underlying it all is the same core, the core that existed at the beginning of the game, the same virtual sensation that had to provide traction in two minutes or less.

A low skill floor simply means that a virtual sensation is easy to learn. There are different dimensions here: a virtual sensation may be very complicated with many different inputs and states but still relatively easy to learn because it starts with a natural mapping, provides predictable results for input, and avoids state overwhelm with good feedback. Conversely, a very simple virtual sensation can quickly become confusing and difficult to learn if it lacks clear feedback or uses a mapping that is too arbitrary. The skill floor of a virtual sensation is not necessarily a function of its underlying complexity.

A high skill ceiling means that completely mastering the virtual sensation is extremely time-consuming, if not impossible. Take the original Pong, for example: the controls could not be simpler, but how does one “master” Pong? The only way to measure is through competition. In this sense, mastery exists as a sensation within the player’s mind. If they feel there is always some way for them to improve, and feel rewarded for doing so, they will continue to play. The player’s idea of mastery may be the ability to consistently beat their friends, to end a game of, say, Counterstrike with the highest score on the server, or to “beat” a game by completing all the tasks the designer has set – getting all 120 stars in Super Mario 64, for example. Players choose all kinds of goals to represent various levels of mastery, and game designers are good at supplying challenges to match and exceed those levels (or the means for players to create their own challenges.) Providing varied challenges is only half the story, though. The other thing that’s necessary for a long-lived game is a virtual sensation that has enough sensitivity to provide a large number of skill layers.

A skill layer is a chunk of skills and learning that must be mastered before the player can graduate to the next layer of skills and challenges. For example, in the game Ski Stunt Simulator, the first layer of skills is learning how to bend the skier forwards and backwards, getting him to assume various positions and to shift his weight forwards and backwards on the skis. Next, you learn how to cause him to jump by quickly shifting between ducking and standing. Once you’ve “chunked” the various complicated motions necessary to jump into a single action and can reproduce it with ease, you learn to lean forward and back as you jump, tucking up to do a forward flip. Next, you learn how to back flip. Soon you’re doing multiple flips of both types, and so on. There are many, many skill layers in Ski Stunt Simulator, yielding a near-endless “replayability.”

One way to create a game that has a lot of skill layers is tuning the relationship between input and reaction sensitivity. In the Cube Movement 2 test, the normal controls have low input sensitivity and low reaction sensitivity. The input sensitivity is low because there are only four buttons, each of which has only two states, on or off. The reaction sensitivity is low because the game’s reaction for each button has only two states, moving at full speed or not moving at all. This is not a very good virtual sensation: very stiff, with very little fluidity or appeal. In some instances – the original Legend of Zelda, for example – this grid-like rigidity is desirable because it allows for a more contemplative, less visceral feel. As in Pac-Man, all rotation and superfluous directions of movement have been stripped away for simplicity. The result, however, is not a very compelling virtual sensation when removed from its context.

Cube Movement 2

Now try switching to the “Low Input, High Reaction” controls. The input sensitivity is still the same as it was – the four keyboard keys with only their on and off states – but now the mechanic feels much more fluid and organic, much better. This is because the reaction sensitivity is much higher. When a button is pressed, it’s no longer just starting and stopping movement, it’s ramping up to full speed gradually and taking a while to settle back down again once all input has stopped. There’s a lot more subtlety here, a lot more to master. It feels better, much more like the original Super Mario Brothers than the normal controls. The game is reacting to the simple button inputs with longer, more fluid states that have a bit of play in them. Because each state now takes a while to resolve (forward movement will slow but not totally stop before a sideways motion is started), it has a lot of interesting state overlaps that give the player a great sense of momentum.
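The difference between the normal and “Low Input, High Reaction” controls can be sketched as a single velocity update. The acceleration and friction constants below are invented for illustration; the point is that binary key input gets smoothed into a gradual ramp:

```python
# Sketch of low-input/high-reaction movement: the key is still binary,
# but velocity eases toward full speed instead of switching on and off.
def update_velocity(velocity, key_held, max_speed=5.0, accel=0.5, friction=0.8):
    if key_held:
        # Ramp up gradually toward full speed.
        return min(velocity + accel, max_speed)
    # Settle back down gradually once input stops.
    return velocity * friction

v = 0.0
for _ in range(4):   # hold the key for four frames
    v = update_velocity(v, True)
for _ in range(3):   # release and coast for three frames
    v = update_velocity(v, False)
```

The “normal” controls would replace both branches with `max_speed if key_held else 0.0` – the same input sensitivity, but almost no reaction sensitivity.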

Now try the “High Input, Low Reaction” controls, which are driven instead by the mouse. With this combination, you have very high sensitivity with the input device, the mouse, but almost zero reaction from the game. The cube has become essentially a very large cursor. This is a very natural mapping; the position of the mouse on the screen matches the position of the mouse sitting on the desk, so it’s very easy to feel oriented and get a sense of mastery and control. In addition, using a mouse like this is one of the most fundamental skills of computer use, so we’ve got a global cultural standard to help make this mapping feel natural. Pretty boring, though, isn’t it? Because the mapping is so internalized from years of computer use, there’s nothing to learn, no motion translation to master. There’s very little virtual sensation to mouse movement; it is quick, snappy, and has almost no feel of mass, weight or presence.

The “High Input, High Reaction” controls, on the other hand, have some play to them. There’s a very interesting motion here, one that requires a bit of mastery. It feels nice to whip the block around again and again to hit the red dot and to experiment with trying to slow the block down again and reverse direction or to make little figure eight patterns. Even a game with high input sensitivity and low reaction sensitivity (a first person shooter that ties mouse movement directly to looking around a 3d space, for example) smooths that snappy, jerky input with a little bit of reaction from the game.

Another way to get a lot of sensitivity in virtual sensation is rapid state switching. As noted before, players are conditioned to tolerate changes in mappings during gameplay. As long as there’s good feedback telling the player that a state switch has occurred, it’s ok to change the meaning of the controls on the fly. The benefit of doing this is that you’re increasing the reaction sensitivity a great deal as you do so (though in a much less obvious way than having the object speed up and slow down gradually instead of turning movement on and off discretely). In Super Mario Brothers, there are ostensibly three controls: left, right, and jump. On closer examination, though, we see a bunch of different states that overlap and interact in different ways to create a nice, sensitive feel:

When Mario is not in contact with the ground, the strength of his left and right movement is greatly reduced: a different state. This is a very simple example of increasing sensitivity through state switching. Left and right movement means something different when Mario is in the air, meaning that one input is actually mapped to two separate actions that change depending on context. While the input sensitivity is the same, the player now has two entirely different sets of actions in the game and, in fact, two slightly different virtual sensations that are interwoven to create a whole that is greater than the sum of its parts.

Another type of state switching is “chording,” creating new reaction types by having certain inputs act as modifiers for others. When Mario is in contact with the ground and the B button is held, he moves much more quickly, a modifier that causes him to enter another state: running. This would seem only to affect Mario when in the ground state but it has also been tied into the air state: the speed you’re moving when you jump affects how high you can jump. By remapping the same inputs on the fly to create different results, you’re actually creating a much greater possibility space for the player, one that has many, many more skill layers than a static mapping.
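These overlapping states and chords can be sketched as a pair of small functions. The constants are invented, not measured from any Mario game; what matters is that the same inputs are remapped on the fly depending on context:

```python
# Sketch of state switching and chording in a Mario-style mover.
def horizontal_accel(grounded, b_held):
    """Same left/right input, different meaning per state."""
    base = 1.0 if grounded else 0.3                     # weak air control
    run_bonus = 2.0 if (grounded and b_held) else 1.0   # B chords into "run"
    return base * run_bonus

def jump_height(horizontal_speed, base_height=4.0):
    """Running speed feeds into jump height, linking ground and air states."""
    return base_height + 0.5 * horizontal_speed
```

Three nominal inputs (left, right, jump) plus one modifier thus yield a whole family of interwoven behaviors.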

Finally, note that it’s possible to trigger state switches across both time and space. The most common example of state switching across time in games is a combo: certain sequences of button presses have different meanings if they’re pressed within a certain time of one another. This form of state switching relies not on deeper reaction to certain combinations of simultaneous input, but to sequences of input across time. The result is the same, though: greater sensitivity and more skill layers. The same thing can be accomplished spatially, giving different inputs new meanings depending on their spatial relationships. The ultimate example of this is the game Strange Attractors, a game with only one button for input. Pressing the button turns the “attractors” on or off – they’re gravity wells of sorts – pulling the player’s ship towards them or pushing the ship away. How much pull each gravity well exerts is affected by how close the ship is to it. This simple system, through huge reaction sensitivity, makes even a single button a conduit for a strong virtual sensation.
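A combo – state switching across time – can be sketched as a sequence detector with a timing window. The sequence and window value below are invented for illustration:

```python
# Sketch of state switching across time: a button sequence gains a new
# meaning only when the presses arrive close enough together.
COMBO = ["down", "forward", "punch"]  # illustrative sequence
WINDOW = 0.5                          # max seconds between presses (invented)

def detect_combo(presses):
    """presses: list of (button, timestamp) pairs in chronological order."""
    progress = 0
    last_time = None
    for button, t in presses:
        if last_time is not None and t - last_time > WINDOW:
            progress = 0  # too slow: the sequence resets
        if button == COMBO[progress]:
            progress += 1
            if progress == len(COMBO):
                return True
        else:
            progress = 0
        last_time = t
    return False
```

The identical three inputs spread over several seconds are just three ordinary presses; packed inside the window, they become a fourth, distinct action.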

5. Context – Giving a mechanic meaning by providing the rules and spatial context in which it operates.

In Super Mario 64, there are many possible moves. Using chording and state switching, it’s possible to have Mario triple jump, long jump, wall jump, or do a high back flip (among other things). Now imagine Mario standing in a field of blank whiteness, with no objects around him. If Mario has nothing to interact with, the fact that he has these acrobatic abilities is meaningless. Without a wall, there can be no wall jump. This illustrates the important role context plays in creating virtual sensation: providing meaning to the motion. Context affects virtual sensation in three major ways: through spacing, perception, and improvisation.

Spacing refers to the distance between objects in the game’s environment. For every virtual sensation, there is a range of spatial contexts that provides the best feel. For example, racing games typically have obstacles in the road to avoid. How many objects there are and how far apart they are spaced has a huge effect on virtual sensation. If obstacles are so far apart that the player rarely encounters them, the player has nothing against which to measure their skills and no way to feel out the virtual sensation. Like Mario in a field of blankness, the turning radius of a car in a racing game needs context to have meaning. In the Cube Movement 3 demo, the controls have been tuned similarly to a racing game; the S and F buttons now correspond to rotation instead of moving left and right. Select the “Context Empty” option and follow the course. There’s not much to interact with, not much to steer around, so virtual sensation is mostly absent.

Cube Movement 3

On the other hand, if the objects are spaced in such a way that they are constantly assailing the player, the player will become overwhelmed and frustrated, feeling as though they are unable to control what is happening. An example of this can be found using the ‘context full’ controls in the Cube Movement 3 demo. There are too many objects too close together to effectively steer around, and the result is frustration. Comparing forward speed to the amount and spacing of objects, we can get a rough ratio of objects encountered per second. If this ratio is too high, the player will feel overwhelmed and out of control. If it’s too low, they’ll be bored.
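The rough objects-per-second ratio mentioned above can be made concrete. A hedged sketch – the threshold values below are invented for illustration; in practice they would come out of playtesting, not a formula:

```python
def obstacles_per_second(forward_speed, avg_obstacle_spacing):
    """Rough pacing ratio: obstacles encountered per second of travel.
    Speed and spacing must share units (e.g. meters/sec and meters)."""
    return forward_speed / avg_obstacle_spacing

def pacing_verdict(rate, low=0.5, high=3.0):
    """Classify a pacing ratio against (made-up) tuning thresholds."""
    if rate < low:
        return "bored"        # too few encounters: nothing to measure skill against
    if rate > high:
        return "overwhelmed"  # too many encounters: player loses control
    return "in range"
```

The same ratio shifts if either term changes, which is why retuning a car’s top speed in a racing game usually forces a respacing of the track’s obstacles as well.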

Another thing that affects this bored/overwhelmed tradeoff is perception. If the player can see something in the road five or ten seconds ahead of when they need to steer around it, steering is easy. If the object is outside their perception until they have only milliseconds to react, steering is extremely difficult. If the camera is zoomed in or angled such that the player can’t see what’s in the road ahead, the spacing of objects becomes somewhat irrelevant: even if there are very few obstacles in the road, the player will still be unable to effectively steer around them if they can’t see far enough ahead to react in time. Returning to Cube Movement 3, using the ‘zoomed’ controls makes it extremely difficult to steer because there is so little time to react after an object appears. Notice, though, that the impression of speed is increased; by moving the camera closer to the objects, they appear to move more quickly. By contrast, using the “Angled” controls makes steering around the obstacles extremely easy; however, at this angle the impression of speed is very slight. In this way, virtual sensation is a function of player perception as much as it is of reaction or input sensitivity.
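The relationship between camera framing and difficulty described above reduces to a reaction window: the seconds that pass between an obstacle entering view and the player reaching it. A minimal sketch, with illustrative names:

```python
def reaction_window(view_distance, forward_speed):
    """Seconds between an obstacle entering view and the player reaching it."""
    if forward_speed <= 0:
        return float("inf")  # not moving: unlimited time to react
    return view_distance / forward_speed
```

Zooming the camera in cuts `view_distance`, shrinking the window even though `forward_speed` is unchanged – which is exactly why the ‘zoomed’ option feels faster but steers worse.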

Player perception also colors virtual sensation with expectations drawn from experiences with physical reality and other media, such as film. To return to the example of a racing game, speed is relative. The main way a player judges the speed at which their car is moving is by observing how quickly static objects seem to move in relation. If, as in the game Burnout: Revenge, objects fly by extremely quickly, the impression is of very high speed. If the objects seem to move past slowly, or if there aren’t enough objects in the environment, the impression of speed is lost. In the Cube Movement 3 demo, the actual speed of movement is identical in the “Slow” and “Fast” options. In the “Fast” version, however, the small tiled texture on the ground provides a frame of reference for movement, creating a much greater impression of speed. This impression of speed exists in all virtual sensations, from Mario to Burnout, and relies primarily on spatial context and scale. If the objects moving past are very large compared to the object being controlled, like the ground in the “Slow” version of Cube Movement 3, the impression will be that the object is moving very slowly. Obviously, altering that scale relationship, as in the “Fast” version, makes the motion seem much faster.

Player perception is also heavily affected by representation. In Shadow of the Colossus, the colossi move very slowly, cause “camera shake” with their footfalls, and kick up huge amounts of dust and dirt particles as they move. The impression that they are massive, hulking beings of solid stone is extremely compelling and convincing. If they moved more quickly, or did not have those additional effects to represent other aspects of their size, this impression would be shattered. This is because the relative scale of objects in a game creates expectations in the player about how these objects should behave. A massive object needs to behave like a massive object, a tiny one like a tiny one. From the most massive boulder to the tiniest kernel of dust, if it’s going to move it needs to move appropriately.

Shadow of the Colossus

Context is important in just about every game. Even in a simple game like the original Tetris, Alexei Pajitnov had to decide that the playing field would be ten blocks wide by twenty blocks high. If the grid were instead three wide by twenty high, Tetris would be a much different game.

6.Impact and Satisfying Resolution – Defining the weight and size of objects through their interaction with each other and the environment.

A good virtual sensation creates a powerful sense of weight and mass in the player’s mind. After observing the interaction of objects in the game for a very short time, the player extrapolates an entire universe worth of physical laws, from the relative weight and mass of every object in the environment to the way that light works in this world. This extrapolation becomes a kind of self-referent “sense” unique to the world of the game – the player, without having to actually test out every possibility, has a very clear idea what the result will be for any given action. This sense, the subconscious understanding of the underlying laws that govern the interaction of all objects in the game world, is a huge part of virtual sensation. It’s useful in learning because it means that any object can be relied on to act in a certain way (predictable results) and it’s another form of good feedback: it helps the player make good, educated guesses about the results of a certain action. Also, when objects don’t interact properly, it breaks immersion, what Csikszentmihalyi calls “flow.” When an object clips through another object, or if you shoot something and it disappears without reacting, you think “oh wait, I’m playing a game and they programmed it wrong” and flow is broken. Snap the player out of flow too many times, and they put the game down. Another foundation for good virtual sensation, then, is a world where all object interaction is clear, consistent, and feels satisfying.

Much of our knowledge about the way physical reality works comes from watching objects interact. If you knock over a stack of books, they tumble to the floor in a certain way. Throw a tennis ball against a wall and you get a different but equally complicated result. A huge number of small, subconsciously processed variables affect how much that tennis ball will rebound, where it will go, and how long it will bounce or roll before it stops. The fact that we can play a reasonable game of tennis speaks volumes about the human ability to observe, process, and adapt to the dynamics of the physical world in real time. As Chris Crawford is fond of noting, there aren’t any animals that can shoot hoops. This is what makes satisfying resolution of game interactions difficult to achieve: humans are sharply and subconsciously attuned to the way things are supposed to work.

One way around this phenomenon is to simplify your representation. If a character looks photorealistic, it is perfectly reasonable for a player to expect that their interactions with objects in their environment will perfectly mimic reality. If a character is stylized or simplified, it will not defy the player’s expectations if their interactions are also simplified. In many games, there seems to be a constant battle between the representation of an object and the virtual sensation underlying it. Things look ever more realistic, which creates an ever-widening gulf between player expectation and game reality when things continue to clip through one another or otherwise fail to interact properly. Object interaction shouldn’t be a hindrance or constraint on representation. Rather, it is a powerful tool for creating compelling virtual sensation.

To use object interaction to effectively convey information to the player about the relative weights and masses of objects and the nature of their interactions, remember one thing: you’re faking it. The goal is only to create the perception of weight, mass, and force in the player’s mind. This is different from the way things “really are” according to physicists, or accuracy in a simulation. This is more like Aristotle’s naive physics, a theory of physics that, while quaint and amusing to today’s physicists, corresponds much better to everyday, physical observations than later theories. Stuff just has to seem right, which makes it easier to fake.

The way to effectively fake object interactions is by looking at how people perceive things. It’s a well-known fact that exaggeration in an animation can make it more convincing to the audience. Squashing and stretching an object in ways that seem bizarre and unnatural when viewed as individual frames makes it read much better in motion.

Likewise, by exaggerating interactions between objects in a game, we convey physical properties more effectively. For example, in Mario Kart DS the karts scale up and down in a bouncy way when they bump into walls and other karts, but only if the impact happened above a certain speed. In addition, if a big kart hits a little kart it doesn’t just nudge it, it sends that sucker flying with a faked, amplified force. In Cube Movement 4, the interaction between objects in the ‘plain’ setting is very basic. Switching to ‘scale’ adds a faked scaling effect to the interaction, as well as an exaggerating force. Notice how much more satisfying the interactions feel. The best virtual sensations exaggerate the interactions between objects in this way, taking care not only to emphasize the different interactions but to convey only what’s important about them to the player.
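The kind of threshold-gated, amplified response described here might look like the following sketch. The constants and function names are invented for illustration; they are not taken from Mario Kart DS or Cube Movement 4.

```python
BOUNCE_SPEED_THRESHOLD = 5.0   # below this, skip the squash-and-stretch (assumed value)
KNOCKBACK_EXAGGERATION = 3.0   # multiply the "real" impulse for drama (assumed value)

def on_collision(impact_speed, heavy_mass, light_mass):
    """Return (plays_bounce_anim, knockback_speed_for_lighter_object)."""
    # Only sell the impact visually when it's fast enough to matter.
    plays_bounce = impact_speed >= BOUNCE_SPEED_THRESHOLD
    # Physically, momentum transfer scales with the mass ratio; here we
    # then multiply the result to "send that sucker flying."
    real_knockback = impact_speed * (heavy_mass / (heavy_mass + light_mass))
    return plays_bounce, real_knockback * KNOCKBACK_EXAGGERATION
```

Gating on speed keeps low-speed nudges quiet, so the exaggerated response stays reserved for impacts the player will actually read as impacts.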

Another aid in conveying object interactions is particles. Mario Kart DS further emphasizes interactions between karts by triggering particles at the point and moment of impact. If a kart crashes into an obstacle, particles shoot out violently and the kart is sent flying in an exaggerated flip. Switching to the “No Pop” option, try running into the various objects in Cube Movement 4. Now switch to the “Pop” option. A simple spray of arbitrarily star-shaped particles is the difference between a very satisfying interaction and one that seems totally wrong. Even if a spray of dust or particles where objects touch would be surprising in the real world (the constant sparking of metal on metal in Soul Calibur, for example), players interpret it correctly. As long as something happens when objects interact, and that something seems to be appropriate for the speed, mass, and weight of the objects, the feeling of impact is conveyed.
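A burst of impact particles like this is cheap to fake. A hedged sketch, with invented names and ranges – scaling the spray speed with the impact speed is the tuning knob that separates a “pop small” from a “pop large”:

```python
import math
import random

def spawn_impact_particles(point, impact_speed, count=12, rng=None):
    """Spawn short-lived star particles at the contact point.
    Faster impacts produce a faster, more violent spray."""
    rng = rng or random.Random()
    particles = []
    for _ in range(count):
        angle = rng.uniform(0.0, 2.0 * math.pi)            # spray in all directions
        speed = impact_speed * rng.uniform(0.5, 1.5)       # tuning knob: small vs large pop
        particles.append({
            "pos": point,
            "vel": (speed * math.cos(angle), speed * math.sin(angle)),
            "life": rng.uniform(0.2, 0.5),                 # seconds before fade-out
        })
    return particles
```

Because each particle’s speed is proportional to the impact speed, the same routine reads as a gentle puff at low speeds and a violent spray at high ones, with no extra logic.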

What’s “appropriate” may require some experimentation to get right. For example, the “pop small” option in Cube Movement 4, in which the same star particles now have very little speed to their explosion, feels much less satisfying. By contrast, the “pop large” option feels like too great a reaction for the forces at work. It almost looks as though the swinging circular piece is a blade spinning at great speed, with showers of sparks shooting off as it touches pieces of metal.

Finally, it’s important to look for best practices from film when emphasizing object interactions. Camera tricks, especially, are great to draw from. The classic example is having the camera shake when a huge impact or explosion occurs. World War 2 themed games seem to have pushed this direction the farthest, emulating shell-shock down to the ringing noise and blurred vision, taking cues from films like Saving Private Ryan and The Thin Red Line. It’s possible to use effects like camera shake in just about any context, though, as illustrated by its frequent use to emphasize impact in platformers and fighting games.
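One common way to implement the camera shake mentioned above is a decaying “trauma” value that big impacts add to. This is a sketch of that general technique with invented constants, not any particular game’s implementation; squaring the trauma keeps small shakes subtle and big impacts violent.

```python
import random

def camera_shake_offset(trauma, max_offset=10.0, rng=random):
    """Random camera offset for this frame; squaring trauma makes the
    response nonlinear, so small shakes stay subtle."""
    amount = trauma ** 2
    return (max_offset * amount * rng.uniform(-1.0, 1.0),
            max_offset * amount * rng.uniform(-1.0, 1.0))

def decay_trauma(trauma, dt, decay_rate=1.5):
    """Trauma falls off linearly each frame so the shake settles quickly."""
    return max(0.0, trauma - decay_rate * dt)
```

An explosion might add 0.6 trauma and a footstep 0.1; the decay then handles the settling automatically, so the shake needs no per-event animation.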

7.Appealing Reaction – Producing appealing reaction regardless of context or input.

When completely removed from its context, virtual sensation should still be appealing, interesting, and compelling. What’s important here is to separate meaning from appeal. Context is very important to create meaning in a virtual sensation, as well as to provide a point of reference for scale, speed, and weight, but is separate from naked appeal. A virtual sensation has appeal when it’s fun to play and tinker with in a completely empty space. Going back to the Cube Movement 2 test, all the mechanics are fairly naked (the only context being a red dot that can’t be collided with) but the “High Input, High Reaction” test has much more appeal. This is because its motion is more complicated, fluid, and organic-looking than the other three. Things like Shalin Shodhan’s On a Rainy Day and Kyle Gabler’s Big Vine, Attack of the Killer Swarm, and Gravity Head are all great examples of virtual sensations with fantastic, organic appeal.

Cube Movement 2

Additional effects and baked-on animation can also add to appeal. The animations in Jak and Daxter add a lot of appeal to what is otherwise a fairly bland underlying virtual sensation. Most of the techniques used to animate Jak come from traditional animation – squash and stretch and so on – but it’s interesting to note the effect the animations have on perception of virtual sensation. Jak’s movement, while very simple when divorced from the layer of animation on top of it, seems organic, complex, and appealing. In New Super Mario Brothers there is a similar effect: if Mario were just a cube the virtual sensation would not be as appealing. As it is, Mario’s run cycle speeds up gradually and slows down again as he starts and stops, throwing up dust particles both as he runs and if he quickly changes direction. The result is much more compelling than the (already excellent) naked virtual sensation.

The other part of appeal is making sure that no matter what input the player gives the system, the result is compelling. This is especially important for things like crashes and failure states. An enlightened approach is to spend more time on the failure states, making them varied and interesting, since this is where the player will spend most of their time. For example, in Ski Stunt Simulator, it’s fun to crash and mangle the skier. Because the skier is a “rag doll” physics rig, complete with constraints to simulate joints and different, individual masses for each limb, crashing him produces a satisfying, organic-looking result. It’s not just one canned animation playing back every time. He’ll smack his head, tumble down a ravine, or impale himself on a cliff side. In a sort of extreme sports mishap kind of way, it’s very appealing to watch him crash and go limp as his body contorts and tumbles. There’s a very visceral “oooh daaaamn!” kind of reaction, one that has a hugely positive effect both on learning and capture. Because the failure state is so much fun, learning is much easier and frustration mitigated. If you try a run numerous times and still aren’t successful, you can always crash the skier intentionally a few times to put a smile on your face. Likewise, observers will often be “captured” by Ski Stunt Simulator’s organic look, especially when the skier crashes, enticing them to play.

The Importance of Ownership

The underlying goal of all the principles discussed above is to create a feeling of control and mastery so powerful that it transcends context and platform and becomes a powerful tool for self-expression. This feeling creates a strong sense of ownership, which is what happens when players can express themselves in a meaningful way through a game. Any artifacts the game creates based on their inputs (replays and so forth) become an important commodity, and players begin to identify with the game and their achievements in it in a very powerful, transparent way. Players start to feel pride in their accomplishments, and develop a desire to share them with others. The ultimate example is The Sims franchise, which has sold millions of copies based on the feelings of ownership players have over their digital creations and stories.

The best virtual sensations contribute significantly to the feeling of ownership. This happens after the player has fully learned the mechanic and mastered most of the challenges presented by the game, at the point most games get put down. In the game industry, this is often termed “replayability” and is spoken of in hushed tones because of the obvious correlation between games that have this quality and games that do very well financially. Really, this phenomenon is all about ownership: if a player feels a personal investment in a game, they’ll keep playing it. If they keep playing it, they will start to evangelize it. Once mastered, a virtual sensation that has enough sensitivity allows improvisation, which often gives rise to unique forms of self-expression.

Improvisation in a game is the ability to create new and interesting combinations of motion in real time, adapting and reacting to the game’s environment in a fluid, organic way, without forethought. This is an intensely pleasurable experience, a flow experience. When your skill is matching up well to the challenge you’ve undertaken, you get into the flow state, which is universally described as being a wonderful, life-enriching experience. To allow such improvisation, a mechanic needs to have not only a lot of sensitivity (between its input and reaction) but to be very flexible in how it interacts with objects in its environment.

Some games, like Tony Hawk’s Underground, achieve a sense of ownership through a huge number of states and a context that’s well spaced with a lot of utility in a ton of different instances. The player can use any number of states to traverse the environment, using each object in many different ways. All the objects are well spaced relative to one another, which again fosters improvisation by making it easy to transfer successfully between any two objects from any direction of approach. Invariably, no two combos will be the same because you’ll use different objects in different ways, and choose different paths to take depending on the situation. You improvise, making snap judgments about which objects to traverse. At the highest level of play, this becomes even more expressive, with players finding and practicing long “lines” of chained moves used on certain objects. They seek out aesthetically appealing states rather than high scoring ones, recording videos of their most appealing lines and uploading them to the web to share. To these players, Tony Hawk is a form of interpretive dance, enabled by the fact that all the objects have a very high degree of utility from just about any state or relative position.

Other games, like Ski Stunt Simulator, are more fluid and achieve ownership through extremely high input sensitivity and subtlety. Minute differences in the angle of skis to ground, for example, produce a totally different kind of landing. Because there are global rules about object interaction in Ski Stunt – a crash occurs if the skis hit at a certain angle or when the skier’s head hits the ground – there’s a lot of space for interesting improvisation and expression. For example, when the skier is extended, standing at his full height, he raises his arms in the air. If you’re in the air, about to hit your head and trigger a crash state, you can extend the skier’s arms to prevent his head from hitting. This ability isn’t explicitly defined but, instead, is a product of the recombination of a few simple rules (e.g., you can move the skier’s arms up; a crash is only triggered when his head hits).

Finally, when multiple players are involved, expression becomes communication, which opens a whole new realm of powerful social experiences. In Battlefield 2, for example, if you sneak up on someone and stab them with a knife, their state goes from alive to dead. In that context, knifing an enemy player is just playing the game, and slightly embarrassing the enemy player who allowed himself to be snuck up on. If once the player is dead, however, you continue to knife the corpse, this action has a totally different meaning. It’s directly insulting and belittling to the player, who has to watch from his corpse’s perspective as he’s stabbed over and over again until he can respawn.

Here’s a personal example: I once got very lucky sneaking up on someone who was clearly an experienced player and very difficult to sneak up on. I climbed up a ladder behind him just as he turned away. As soon as I reached the top of the ladder, I pulled out my knife in preparation for an easy, embarrassing kill. To my surprise, he immediately turned around again, sweeping for enemies behind him. I had just enough time to stab before I was gunned down. His avatar jerked wildly, translating the actual jerking motion of his mouse hand swinging wildly in real surprise and alarm across the internet and back to me.

Conclusion

The goal of any game is to provide entertaining, life-enriching flow and social experiences, experiences that don’t exist watching a film or reading a book. Compelling virtual sensation is a great foundation for these experiences, providing feelings of challenge, mastery, and control as well as a beautiful kinesthetic experience found in no other medium. The game designer, then, needs an understanding of what gives rise to these experiences and the tools and skills to create them. I hope these principles of virtual sensation can be such a tool.

Throughout the paper I referenced Csikszentmihalyi’s “flow” theory where appropriate. In games, this is often referred to as “immersion”; so, for the purposes of this paper, consider those terms interchangeable. For a detailed description of the flow state, how you can tell if someone is entering or exiting it, how it enriches people’s lives, and the conditions necessary to achieve it, reference Csikszentmihalyi’s original work on flow, Beyond Boredom and Anxiety. For more information about how flow applies directly to games, reference Sweetser and Wyeth’s Gameflow: A model for evaluating player enjoyment in games.

Posted in Thesis

FMP 11 Great Sword Animation Part 2 – Polish

blocking plus

During this week, I continued to polish the great sword animation. The first thing that needed to be done was to correct some obvious errors.

Jitter
Arm error

The following is the revised animation…

swing
swing 2
arm
feet before
feet after
chest before
chest after
final version
Posted in FMP

FMP 10 Great Sword Animation Part 1 – Blocking

ASSETS

In my previous several animations, I always used lightweight weapons (such as daggers, shurikens, and claws) as the main elements. For this week’s task, I decided to try creating a combo animation with a heavy weapon. The most important quality of a heavy weapon is its sense of weight, so the character’s anticipation poses need to be much more pronounced.

For the model, I used the Hunter, a character from Monster Hunter.

REFERENCE

This is the heavy-weapon reference animation I found in Wild Rift.

However, excellent examples alone are not enough as reference, so I also found some tutorials with detailed steps.

excellent tutorial by 3dsMax KeyPlayer

PROCESS

Blocking part 1
BLOCKING PART 2
blocking plus

After finishing the blocking phase, many poses are not properly connected, and many joints have jitters and errors, which need to be corrected in the subsequent spline and polish stages.

Posted in FMP

FMP 9 Death Animation

In games, death animation is a very common form of animation. In this week’s study, I tried to make a death animation, and the model used is still Link.

REFERENCE

This is an animation tutorial I found on YouTube, which I plan to use as my reference.

PROCESS

key poses
This is the final rendered version

In this version, the animation of the cloak needs further improvement.

dark mode without cloak
Posted in FMP