
July 2000/Vol. 43, No. 7 COMMUNICATIONS OF THE ACM

To solve the problem of generating realistic human motion, animation software must not only produce realistic movement but also give the animator full control of the process.

Controlling Physics in Realistic Character Animation

Zoran Popović

Images from "Mira and the Wind" (animated by Anthony Lampa, lit by Peter Sumanaseni, and directed by Barbara Mones, Animation Research Labs, Department of Computer Science & Engineering, University of Washington).

For the last 10 years, computers have been used with great success to produce realistic motion of passive structures by simulating the physical laws of motion—something that would be very difficult to do by hand. Examples include the simulation of colliding rigid bodies and of cloth motion. It would seem that creating realistic character animation would not be significantly more difficult than computing the motion of cloth or other such passive objects. As with passive-object simulations, a character's motion needs to be consistent with the laws of physics for an animation to appear realistic. But consistency alone is not sufficient for generating realistic-looking animations. The character also produces forces that create locomotion. Humans utilize their muscles in many ways in order to walk or run, and only a small set of simulated animations would look realistic (see Figure 1). To produce natural-looking motion, it's not enough to have just physically correct motion; the intricacies of the character's muscles and bones and how they pertain to motion have to be taken into account. Providing the animator with the ability to control this highly involved process adds further difficulties. This dual goal of controlled realism motivated me (and my advisor Andy Witkin, now at Pixar Animation Studios) to devise the methodology described here.

Reusable Character Motion Libraries

The prevalent use of keyframing and procedural methods in computer animation stems from the fact that these methods put full control of the resulting motion in the hands and imagination of the animator. The burden of animation quality rests entirely on the animator, much as puppeteers have full control over the movement of their marionettes by pulling on specific strings. Highly skilled animators and special-effects wizards appreciate this low-level control, because it allows them to fully express their artistry. Having so much control also makes it that much more difficult for the world's less-talented animators to create animations. In fact, the task of appropriately positioning a character in a specific pose at the right time is arduous, even for the simplest animation.

Figure 1. Automatically generated human jump (rendered by Peter Sumanaseni, University of Washington).

Instead of incrementally setting keyframes for various character poses, the unskilled animator would ideally like to be able to edit high-level motion constructs. For example, an animator might want to reposition footprints or simply specify that a movement should be more energetic. Alternatively, the animator might want to impose greater importance on balance while performing a movement, or change a character's behavior by specifying the walking surface be significantly more slippery. The animator should also be able to access a human-run library, instantiating a specific run by demanding realism and specifying the character dimensions, foot placements, even the emotional state of the runner. By limiting, say, the left knee's range of motion, the library would produce a limping run satisfying all previous requirements. The animator should be able to specify finer-detail constraints on bounce quality, air time, and specific arm poses (see the first sidebar). Finally, it should be possible to merge a collection of instantiated motion sequences, including, say, human run, human jump, karate kicks, soccer ball kick, and tennis serve, from the library to produce a seamless character animation. The use of such human-movement libraries would make computer animation a much more accessible storytelling medium useful to a much more diverse population of computer-content providers.

One of the most difficult problems in light of such extremely flexible libraries is how to maintain the realism of the motion despite all the possible changes in the motion specification. Although not always needed, the realism requirement would enable even unskilled animators to create the motion of synthetic humans—arguably one of the greatest challenges in computer animation. Realistic synthetic humans would be very useful in a number of areas:

Education. Using desktop PCs, children could learn from personal instructors that seem as real as the teachers in their classrooms.

Entertainment. In digital filmmaking and videogame design, creating realistic human characters is perhaps the greatest open challenge. More sophisticated models of human motion and appearance are needed.

Human-computer interaction. The face any computer shows its user is impersonal. Realistic full-bodied human "guides" would make computers more accessible to a wider population.

Teleconferencing. By combining video with more sophisticated shape and motion information, teleconferencing would obviate the need for using single-viewpoint video, allowing participants more freedom of movement and a greater sense of presence. Imagine, in the next two to five years, realistic synthetic avatars representing us in the distributed digital world, changing the notion of teleconferencing as we've come to know it.

Although people are generally quick at visually perceiving the subtleties of human motion in nature, our perceptual understanding of natural movement offers little help to an animator trying to generate such motion. Moreover, synthesizing and analyzing the high-quality motion of dynamic 3D articulated characters has proven to be an extremely difficult problem. The collective knowledge of biomechanics, control theory, robot-motion planning, and computer animation indicates that the underlying processes governing motion are complex and difficult to control.

Human Run and Jump Libraries

Figure. Frames from the crossed footsteps, limp, and wide footsteps run and from the diagonal, obstacle, unbalanced, and twist jump.

As a proof of concept, I generated two libraries: human run and human jump. The figure shows the original motion-capture sequence of a human run from which the motion library was constructed, as well as a selection of markedly different automatically generated animations and a range of jumps generated from a single human-jump-motion library. For input sequences, I used high-detail (120Hz) motion-capture data. During the fitting stage, I used drastically different simplification approaches for the libraries to show the versatility of the physics-based framework.

All of these motion sequences would be difficult to create using existing motion-editing tools. While a number of constraints could conceivably be introduced to enhance realism, for some sequences they would require an overwhelming amount of work, on par with creating a realistic motion sequence from scratch through keyframing. In contrast, my approach requires a minimal number of intuitive changes for each transformed sequence, including the following human run motions:

Crossed footsteps. To force the character to twist at each step, crisscrossing its feet, I moved footprints to the opposite side of the body.

Limp run. To create the appearance of a leg in a cast, I removed the left knee DOF. In the final motion sequence, the character leans to the side and swings its right leg in a more dramatic fashion, creating a realistic (albeit painful) limp run.

Wide footsteps. To force the character to leap significantly farther to the side at each step, I repositioned the footprint mechanical constraints to be wider apart. Since each step then covers more distance, the overall resulting motion has leaps of smaller height, as in the figure. The appropriate change in the upper body orientation is apparent in the resulting motion.

Moon run. To create a human-run sequence under lunar gravity, I reduced the gravity constant to the gravity on the moon's surface. The resulting run is slower and has much higher leaps appropriate for the low-gravity environment.

Neptune run. To see how the running sequence would adjust to an extreme gravitational field, I increased the Earth's gravity tenfold. The resulting flight phase of the run is so low to the ground that the running character appears to be speed walking.

Short-legged limp run. To test the limits of character-model modification, I shortened the shin of the right leg and fixed the left knee DOF, as in the limp run. The output running sequence has an extreme limp, with the leg in the cast swinging more to the side due to the shorter right leg.

I created the broad-jump motion library to explore how far a character could be simplified without losing the dynamic essence of the jump. In order to demonstrate the power and flexibility of these simplification tools, I used a drastically different simplification approach from the one used in the human-run motions. I reduced the upper body structure to a single mass point that moves with three prismatic muscles pushing off from the rest of the body. Since the legs move in unison during a broad jump, I also turned them into a single leg. I turned the knee hinge joint into a prismatic joint, showing that even with this completely changed character model, the dynamic properties of the broad jump are preserved. The simplified character ("hopper"), which lacks angular joints and has 10 DOFs, six of which are the global position and orientation of the model, can perform the following jumps:

Diagonal. I displaced the landing position to the side and constrained the torso orientation to point straight ahead. This change realigned the push-off and anticipation stages in the direction of the jump.

Obstacle. To force the legs to be raised during the flight stage, I raised the landing position and introduced a hurdle. As a result, the character's push-off is more vertical, and the legs tuck in during flight.

Unbalanced. I removed the final pose constraint that imposed the upright position. In the resulting sequence, the character never uses its muscles to stand up upon landing, since doing so requires extra energy. Instead of straightening up, the character tumbles forward, giving the appearance of poor landing balance.

Twist. To mandate a 90-degree turn, I introduced the torso-orientation pose constraint at the end of the animation. The output motion shows the change in the anticipation and introduction of the body twist during flight. The landing also changes to accommodate the sideways landing position.

The novel approach to the problem of creating animations I describe here maintains a level of realism in light of all other motion modifications. Instead of motion synthesis, I approach such animations through "motion transformation," or the adaptation of existing motion. For example, instead of generating an animation from scratch, I transform existing human-run sequences by changing their parameters until the resulting motion meets the needs of the animation. A number of other computer-animation researchers have also adopted the motion-transformation approach, which has arguably become the most active research direction in computer animation [2, 3, 6, 12].

Figure 3. Kinematic character simplification: (a) elbows and spine abstracted away; (b) upper body reduced to the center of mass; and (c) symmetric movement abstraction.

Any dynamically plausible motion, such as captured motion and physical simulations, can be used as input to my transformation algorithm. The first step in the algorithm is construction of a simplified character model and the fitting of the motion of the simplified model to the captured motion data. This fitted motion is a physical spacetime-optimization solution including the body's mass properties, pose, footprint constraints, and muscles, as well as the motion property being optimized, called the "objective function" [11] (see the second sidebar). To edit an animation, the animator modifies the constraints and physical parameters of the model and other spacetime-optimization parameters, including limb geometry, footprint positions, objective function, and gravity. From this altered spacetime "parameterization," the algorithm computes a transformed motion sequence and maps the motion change of the simplified model back onto the original motion to produce the final animation sequence.

In addition to providing a methodology for ensuring the realism of such motion, the spacetime-optimization model is an intuitive tool for the high-level editing of motion sequences, including: foot placement and timing; the kinematic structure of the character; the dynamic environment of the animation; and the motion property being optimized by the animation task. The algorithm's ability to preserve the dynamics of motion and the existence of a rich set of motion controls enable animators to create motion libraries from a single input motion sequence. Once the original motion is fitted onto the spacetime-optimization model, the model can then be presented to the animator as a tool for generating the movement that meets the specifications of the given animation they're working on.

Figure 2. Outline of the motion-transformation process.

Transforming the Motion Sequence

As in other motion-capture editing methods, my algorithm (presented in 1999) does not synthesize motion from scratch. Instead, it transforms the input motion sequence to satisfy the needs of the animation. Although its development was motivated by the general need to enable realistic high-level control of high-quality captured-motion sequences, the same methods can be applied to motion from arbitrary sources of realistic motion [8, 9]. At its core, the algorithm uses the spacetime-optimization formulation, which maintains the dynamic integrity of motion and provides intuitive motion control. Before these results were available, dynamic spacetime-optimization methods were used exclusively for motion synthesis, rather than for motion transformation [7, 10].

Also worth noting is that spacetime optimization differs from the robot-controller-simulation approach to character animation [4, 5]. Although both approaches generate realistic motion, robot-controller approaches do not solve directly for motion paths. Instead, they construct controllers that generate forces at a character's joints based on the state of the character's dynamic properties. Once the controllers are generated by a robot-control developer, the actuated character is placed in the dynamic simulation environment to produce the final motion sequence. Spacetime optimization does not use controllers or perform simulations. An animation problem is phrased as a large variational optimization, whose solution is the input motion-capture sequence. Unfortunately, spacetime-optimization methods have not been shown to be feasible for computing human motion over long periods of time because of the nonlinearity and parameter-space explosion. For this reason, my approach first simplifies the character model. The entire transformation process (see Figure 2) involves four main stages:

Character simplification. The tool developer creates an abstract character model containing the minimal number of degrees of freedom (DOFs) necessary to capture the essence of the input motion while mapping input motion onto the simplified model.

Spacetime motion fitting. The tool developer finds the spacetime-optimization problem whose solution closely matches the simplified character motion.

Spacetime edit. The animator then adjusts spacetime motion parameters, introduces new pose constraints, changes the character kinematics, defines the objective function, and more.

Motion reconstruction. The algorithm remaps the change in motion introduced by the spacetime edit onto the original motion to produce the final animation.

Turning Motion into a Spacetime-Optimization Problem

A "character" is an object performing motions of its own accord. It has a finite number of kinematic DOFs and a finite number of muscles. DOFs usually represent joint angles of the character's extremities; muscles exert forces on different parts of the body, thereby actuating locomotion.

The goal of motion synthesis is to find a character's desired motion. This "goal motion" is rarely specified uniquely; rather, the animator looks for a motion that satisfies some set of storytelling requirements. Generally, these requirements are represented through constraints, external forces, and the objective function. The requirements of a sequence animating a computer-generated character getting up from a chair might include the fact that the character is sitting in the chair at time t0 and standing up at final time t1. The spacetime-optimization formulation refers to such requirements as "pose constraints," insisting the character use its own muscles to satisfy these constraints.

In addition to pose constraints, the environment imposes a number of "mechanical constraints" onto the body. For example, in order to enforce the upright position of a human, the animator constrains both feet to the floor. The floor exerts forces onto the feet, ensuring they never penetrate the floor's surface. All mechanical constraints provide the external forces necessary to satisfy these constraints. Other external forces, such as gravity and wind, may also be required within the environment. Finally, the animator has to ensure the dynamic correctness of the motion by constraining the acceleration of each DOF. Intuitively, the formulation ensures that F=ma holds for all DOFs at all times, calling such constraints "dynamics constraints." As long as these constraints are satisfied, the resulting motion is physically correct.

Constraints determine a large portion of the motion specification. In the spacetime framework, the objective function provides the means to modify the quality of motion. Some examples of easily measured objective functions include energy consumption, static balance, and the amount of force exerted on the floor. Ideally, the animator also wants to control other qualities of motion like agility and even the character's emotional state.

Motion defined this way maps straightforwardly onto a nonlinearly constrained optimization problem; the algorithm optimizes the objective function, ensuring that the pose and mechanical and dynamics constraints are satisfied. This optimization process is a variational calculus problem, since it solves for functions of time, instead of values.

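The sidebar's formulation can be sketched concretely. The toy problem below phrases a one-DOF vertical "hopper" jump as a discretized spacetime optimization: the unknowns are the height and muscle-force trajectories, dynamics constraints enforce F = ma at interior time samples, pose constraints pin the start, apex, and landing heights, and the objective is total muscle effort. The time discretization, the choice of SciPy's SLSQP solver, and all numeric values are my own illustrative assumptions, not the article's implementation.

```python
# Minimal spacetime-optimization sketch (illustration only, not the paper's
# code): one vertical DOF q (height) actuated by one muscle force f,
# discretized over N time samples.
import numpy as np
from scipy.optimize import minimize

N, T = 21, 1.0          # time samples and total duration (s) -- assumed
dt = T / (N - 1)
m, g = 1.0, 9.81        # mass (kg) and gravity (m/s^2)

def unpack(x):          # decision vector holds the q and f trajectories
    return x[:N], x[N:]

def objective(x):       # minimize muscle effort, a common objective function
    _, f = unpack(x)
    return dt * np.sum(f ** 2)

def dynamics(x):        # "dynamics constraints": F = ma at interior samples,
    q, f = unpack(x)    # with acceleration from central finite differences
    acc = (q[2:] - 2 * q[1:-1] + q[:-2]) / dt ** 2
    return m * acc - (f[1:-1] - m * g)

def poses(x):           # "pose constraints": start/land at 0, apex at 0.5 m
    q, _ = unpack(x)
    return np.array([q[0], q[-1], q[N // 2] - 0.5])

x0 = np.zeros(2 * N)
sol = minimize(objective, x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": dynamics},
                            {"type": "eq", "fun": poses}],
               options={"maxiter": 500})
q_opt, f_opt = unpack(sol.x)
```

Because the dynamics and pose constraints here are linear in the unknowns and the objective is quadratic, this small instance converges easily; the article's point is that the full-character version of the same formulation does not, which is what motivates character simplification.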

Figure 4. Full and simplified characters for human run and broad jump.


Once the spacetime model is computed, it can be reused to generate a wide range of animations. The spacetime-edit and motion-reconstruction stages take much less time to compute than the first two stages, enabling the computation of transformed motion sequences at near-interactive speeds.

Instead of solving spacetime-constraint optimizations on the full character, the tool developer first constructs a simplified character model the algorithm then uses for all spacetime optimizations (see Figure 3). Simplified models capture the minimum amount of structure necessary for the input motion task, thus capturing the essence of the input motion. Subsequent motion transformations modify this abstract representation while preserving the specific feel and uniqueness of the original motion.

The simplification process draws from ideas in biomechanics research [1]. One of them, abstractly speaking, is that highly dynamic natural motion is created by "throwing the mass around," or changing the relative position of body mass. The result is that a human arm with more than 10 DOFs can be represented by a rigid object with only three shoulder DOFs without losing much of its "mass displacement ability." Simplification of body parts also depends on the type of input motion. For example, although simplifying an arm may work well for the human-run motion, the same simplification would not be useful for representing, say, the ball-throwing motion.

In the motion libraries I've created so far, the simplification process reduces the number of kinematic DOFs, as well as muscle DOFs, by a factor of two to five (see Figure 4). Since each DOF is represented by hundreds of unknown coefficients during the optimization process, simplification reduces the size of the optimization by as many as 1,000 unknowns. More important, a character with fewer DOFs also creates constraints with significantly smaller nonlinearities. In practice, the optimization has no convergence problems with the simplified character models, and does not converge with the full-character model.

Character simplification is performed manually by the animator applying three basic principles:

DOF removal. Some body parts are fused together by removing the DOFs linking them. Elbow and wrist DOFs are usually removed for running and walking motion sequences in which they have little influence on motion.

Node subtree removal. In some cases of high-energy motion, the entire subtree of the character hierarchy can be replaced with a single object, usually a mass point with three translational DOFs. For example, the upper body of a human character can be reduced to a mass point for various jumping-motion sequences in which the upper body "catapults" in the direction of the jump.

Symmetric movement. Broad-jump motions contain inherent symmetry, as both legs move in unison. Thus, the simplification process abstracts both legs into one, turning the character into a monopode, as if it were a pogo stick.

Once the character model is simplified, the original motion can be mapped onto it. However, before the animator can edit the motion with spacetime constraints, the tool developer creates not only dynamically correct but realistic motion of the simplified model by identifying the spacetime-optimization problem whose solution comes very close to the original motion.

The motion-transformation framework uses an abstract representation of muscles to apply forces directly onto DOFs, much like robotic servo-motors positioned at joints apply forces on robotic limbs. The algorithm places these muscles at each character DOF, ensuring the minimum set of muscles to achieve the full range of character motion.

Most of the spacetime constraints fall out of the input motion. Therefore, in a run or walk sequence, the library creator specifies mechanical point constraints at every moment the foot is in contact with the floor. Similarly, a leg-kick animation defines a pose constraint at the time the leg strikes the target. An animator can also introduce additional constraints to add control during motion editing. For example, the animator might introduce a hurdle obstacle into the human-jump-motion environment, forcing the character to, say, clear a certain height during flight.

With the spacetime-optimization problem defined appropriately, the animator can edit the intuitive "control knobs" of the spacetime-optimization formulation to produce a nearly inexhaustible number of different realistic motion sequences.
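The motion-reconstruction stage can be pictured as a displacement map: the change the spacetime edit made to the simplified model's DOF trajectories is added back onto the corresponding DOF trajectories of the original captured motion. The sketch below is a simplified reading of that idea; the per-column mapping and the plain additive blend are my assumptions for illustration, not the article's exact scheme.

```python
# Hedged sketch of motion reconstruction as a displacement map.
import numpy as np

def reconstruct(original, fitted_simple, edited_simple, dof_map):
    # original:       (T, D_full) captured DOF curves of the full character
    # fitted_simple:  (T, D_simple) simplified model fitted to the capture
    # edited_simple:  (T, D_simple) simplified model after the spacetime edit
    # dof_map[j]:     full-model DOF column that simplified DOF j stands for
    delta = edited_simple - fitted_simple   # what the edit changed
    result = original.copy()
    for j, col in enumerate(dof_map):
        result[:, col] += delta[:, j]       # remap the change onto full model
    return result

# Tiny demo: a 4-frame clip, 3 full DOFs, 2 simplified DOFs mapped to 0 and 2.
clip = reconstruct(np.zeros((4, 3)), np.zeros((4, 2)), np.ones((4, 2)), [0, 2])
```

Note how full-model DOFs with no simplified counterpart (column 1 in the demo) keep their original captured curves, which is how the "feel and uniqueness" of the input motion survives the edit.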

Spacetime Edits

A spacetime-optimization formulation provides powerful and intuitive control of many aspects of the dynamic animation: pose and environment constraints, explicit kinematic and dynamic properties of the character, and the objective function.

By changing existing constraints, the animator can rearrange foot placements in both space and time. For example, a human-run sequence can be changed into a zig-zag run on an uphill slope by moving the floor-contact constraints further apart while progressively elevating them. The constraint timing can also be changed; an example involves extending the floor-contact-time duration of one leg to create an animation in which the character appears to favor one leg. The animator can also introduce new obstacles along the running path, producing new constraints that might require legs to clear a specified height during the flight phase of the run. It would also be possible to alter the environment of the run by changing the gravity constant, producing a human-run sequence on, say, the moon's surface.

Changes can also be made on the character model itself. For example, the animator can change the limb dimensions or their mass distribution characteristics and observe the motion's resulting dynamic change. The animator can remove body parts, restrict various DOFs to specific ranges, and remove DOFs altogether, effectively placing certain body parts in a cast; for example, different injured-run sequences would result from shortening a leg, making one leg heavier than the other, reducing the range of motion for the knee DOF, or removing the knee DOF altogether. Muscle properties can also affect the look of transformed motion; for example, if the force output of the muscles is limited, the character would be forced to compensate by using other muscles.

Finally, the animator can change the overall "feel" of the motion by adding additional appropriately weighted objective components; for example, a softer-looking run would result from an objective component minimizing floor-impact forces. Alternatively, the run can be made to look more stable by including a measure of balance in the objective component.
After each edit, the algorithm re-solves the spacetime-optimization problem and produces a new transformed animation. Since the optimization starting point is near the desired solution, and all dynamic constraints are satisfied at the outset, optimization converges rapidly. In practice, although the initial spacetime optimization may take more than 15 minutes to converge, spacetime optimizations during the editing process take less than two minutes.
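In optimization terms, a spacetime edit amounts to changing one parameter of the problem and re-solving, warm-started from the previous solution. The toy sketch below alters only the gravity constant of a one-DOF discretized hop and re-solves from the Earth-gravity solution; the discretization, the choice of SciPy's SLSQP solver, and all numbers are illustrative assumptions, not the article's system.

```python
# A "spacetime edit" as parameter change plus warm-started re-solve.
import numpy as np
from scipy.optimize import minimize

N, dt, m = 21, 0.05, 1.0    # time samples, step (s), mass (kg) -- assumed

def solve(g, x0):
    def split(x):
        return x[:N], x[N:]             # height samples q, muscle forces f
    def effort(x):                      # objective: total muscle effort
        _, f = split(x)
        return dt * np.sum(f ** 2)
    def dynamics(x):                    # F = ma at interior time samples
        q, f = split(x)
        acc = (q[2:] - 2 * q[1:-1] + q[:-2]) / dt ** 2
        return m * acc - (f[1:-1] - m * g)
    def poses(x):                       # pinned start/landing, 0.5 m apex
        q, _ = split(x)
        return np.array([q[0], q[-1], q[N // 2] - 0.5])
    return minimize(effort, x0, method="SLSQP",
                    constraints=[{"type": "eq", "fun": dynamics},
                                 {"type": "eq", "fun": poses}],
                    options={"maxiter": 500})

earth = solve(9.81, np.zeros(2 * N))    # initial fit under Earth gravity
moon = solve(1.62, earth.x)             # the edit: lunar gravity, warm start
```

Starting the second solve from `earth.x` mirrors the behavior described above: because the starting point is already near a solution and the dynamics structure is unchanged, the edited problem converges far faster than the initial fit.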

Conclusion

This research represents the first solution of the problem of editing captured motion that accounts for dynamics. The powerful high-level controls of the spacetime formulation are especially appealing, because they allow the animator to apply intuitive modifications to the input motion sequence. This intuitive control makes the algorithm particularly amenable to the motion-library paradigm. However, three important areas in realistic character animation still need to be addressed: integrating and applying multiple realistic motion sequences into a single continuous character animation; retargeting motion to different characters while preserving realism; and developing an intuitive interface to the motion-data libraries.

Eventually, research in these areas would allow the reusable-motion paradigm to find its way not only into film and video games, but into every home PC. It won't be long before every PC has sufficient 3D rendering and computing resources to let anyone use computer animation as an expressive medium, much as Web pages have emerged as a ubiquitous form of expression today. Reusable motion is a crucial concept enabling practically any animator, no matter how skilled, to become an expressive storyteller.

References

1. Blickhan, R. and Full, R. Similarity in multilegged locomotion: Bouncing like a monopode. J. Comp. Physiol. 173 (1993), 509–517.
2. Bruderlin, A. and Williams, L. Motion signal processing. In Proceedings of SIGGRAPH '95 (Los Angeles, Aug.). Addison Wesley, 1995, 97–104.
3. Gleicher, M. Motion editing with spacetime constraints. In Proceedings of the 1997 Symposium on Interactive 3D Graphics, M. Cohen and D. Zeltzer, Eds. (Apr.). ACM SIGGRAPH, 1997, 139–148.
4. Hodgins, J. Animating human motion. Scientific American 278, 3 (Mar. 1998), 64–69.
5. Hodgins, J. and Pollard, N. Adapting simulated behaviors for new characters. In Proceedings of SIGGRAPH '97 (Los Angeles, Aug.). Addison Wesley, 1997, 153–162.
6. Lee, J. and Shin, S.-Y. A hierarchical approach to interactive motion editing for human-like figures. In Proceedings of SIGGRAPH '99 (Los Angeles, Aug.). Addison Wesley Longman, 1999, 39–48.
7. Liu, Z., Gortler, S., and Cohen, M. Hierarchical spacetime control. In Proceedings of SIGGRAPH '94 (Orlando, Fla., July). ACM Press, New York, 1994, 35–42.
8. Popović, Z. Motion Transformation by Physically Based Spacetime Optimization. Ph.D. thesis, Carnegie Mellon University, 1999.
9. Popović, Z. and Witkin, A. Physically based motion transformation. In Proceedings of SIGGRAPH '99 (Los Angeles, Aug.). Addison Wesley Longman, 1999, 11–20.
10. Rose, C., Guenter, B., Bodenheimer, B., and Cohen, M. Efficient generation of motion transitions using spacetime constraints. In Proceedings of SIGGRAPH '96 (New Orleans, Aug.). Addison Wesley, 1996, 147–154.
11. Witkin, A. and Kass, M. Spacetime constraints. In Proceedings of SIGGRAPH '88 (Aug. 1988), 159–168.
12. Witkin, A. and Popović, Z. Motion warping. In Proceedings of SIGGRAPH '95 (Los Angeles, Aug.). Addison Wesley, 1995, 105–108.

Zoran Popović ([email protected]) is an assistant professor in the Department of Computer Science and Engineering at the University of Washington in Seattle. © 2000 ACM 0002-0782/00/0700 $5.00

