U.S. Pat. No. 11,478,707
GAP JUMPING SIMULATION OF STRETCHABLE CHARACTER IN COMPUTER GAME
Assignee: SQUARE ENIX LTD.
Issue Date: December 8, 2020
Illustrative Figure
Abstract
Embodiments relate to generating image frames including a motion of a character with one or more stretchable body parts by either performing only blending of prestored animation clips or performing both the blending of prestored animation clips and inverse kinematics operations where one or more bones in the body parts are stretched or contracted. Whether to perform only the blending or also the inverse kinematics depends on whether predetermined conditions are satisfied. Prestored animation clips to be blended may be determined according to the speed of the character when performing the jumping motion. When performing the inverse kinematics, physical properties of the character are simulated to determine the trajectory of the character during the jumping.
Description
The figures depict various embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.
DETAILED DESCRIPTION
In the following description of embodiments, numerous specific details are set forth in order to provide more thorough understanding. However, note that the embodiments may be practiced without one or more of these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Embodiments are described herein with reference to the figures where like reference numbers indicate identical or functionally similar elements. Also in the figures, the leftmost digit or digits of each reference number correspond to the figure in which the reference number is first used.
Embodiments relate to generating image frames including a jumping motion of a character with one or more stretchable body parts by either performing only blending of prestored animation clips or performing, in addition to the blending of the prestored animation clips, inverse kinematics operations where one or more bones in the body parts are stretched or contracted. Whether to perform only the blending or the additional inverse kinematics depends on whether predetermined conditions are satisfied. Prestored animation clips to be blended may be determined according to the speed of the character when performing the jumping motion. When performing the inverse kinematics, physical properties of the character are simulated to determine the trajectory of the character during the jumping.
Among other advantages, embodiments enable generation of image frames, including jumping of a character with one or more stretchable body parts over a gap, that is both efficient and realistic. By only blending animation clips, the jumping motion of the character may be generated with efficient use of computing resources (e.g., processor cycles and memory space). However, in certain conditions where the blending of animation clips by itself does not result in motions that are realistic or as desired by a game developer, the character poses in the images are further determined by inverse kinematics, which consumes more computing resources than the blending of animation clips.
FIG. 1 is a block diagram of a system 100 in which the techniques described herein may be practiced, according to an embodiment. The system 100 includes, among other components, a content creator 110, a server 120, client devices 140, and a network 144. In other embodiments, the system 100 may include additional content creators 110 or servers 120, or may include a single client device 140.
The content creator 110, the server 120, and the client devices 140 are configured to communicate via the network 144. The network 144 includes any combination of local area and/or wide area networks, using wired and/or wireless communication systems. In one embodiment, the network 144 uses standard communications technologies and/or protocols. For example, the network 144 includes communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network 144 include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). Data exchanged over the network 144 may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML). In some embodiments, all or some of the communication links of the network 144 may be encrypted using any suitable technique or techniques.
The content creator 110 is a computing device, such as a personal computer, a mobile phone, a tablet, or so on, which enables a game developer to create content items (e.g., characters and environment information) for a computer game. For this purpose, the content creator 110 includes a processor and a memory (not shown) that stores various software modules for creating content items. The created content items are sent to the server 120 for storing on its memory 130.
The server 120 is a computing device that includes a processor 128 and a memory 130 connected by a bus 127. The memory 130 includes various executable code modules or non-executable content items 122. The server 120 may receive and route messages between the content creator 110 and the client devices 140. The non-executable content items 122 may include information on characters with stretchable body parts. Such content items may be sent to the client devices 140 via the network 144.
The processor 128 is capable of executing instructions, sequential or otherwise, that specify operations to be taken, such as performance of some or all of the techniques described herein. The bus 127 connects the processor 128 to the memory 130, enabling data transfer from the one to the other and vice versa. Depending upon the embodiment, the server 120 may include additional elements conventional to computing devices.
Each client device 140 is a computing device that includes a game or other software. The client device 140 receives data objects from the server 120 and uses the data objects to render graphical representations of characters and the environment in which the characters take actions in the game. Different client devices 140 can request different data objects from the server 120.
Although the embodiment of FIG. 1 is described as operating in a networked environment, in other embodiments, the client devices 140 are not connected via a network and the computer game is executed without exchanging messages or content items over any network. In such cases, any content items associated with the computer game may be received and installed on the client devices 140 using a non-transitory computer readable medium such as a DVD-ROM, CD-ROM, or flash drive.
FIG. 2 is a block diagram of the client device 140 of FIG. 1, according to an embodiment. Depending upon the embodiment, the content creator 110 and/or server 120 may be embodied as a computing device that includes some or all of the hardware and/or software elements of the client device 140 described herein. The client device 140, content creator 110, and/or server 120 are any machine capable of executing instructions, and may each be a standalone device or a connected (e.g., networked) set of devices.
The client device 140 may include, among other components, a central processing unit (“CPU”) 202, a graphics processing unit (“GPU”) 204, a primary memory 206, a secondary memory 214, a display controller 208, a user interface 210, and a sound controller 212 that are connected by a bus 216. While only a single client device 140 is illustrated, other embodiments may include any collection of client devices 140 that individually or jointly execute instructions to perform any one or more of the methodologies discussed herein.
The primary memory 206 is a machine-readable medium that stores instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. For example, the primary memory 206 may store instructions that, when executed by the CPU 202, configure the CPU 202 to perform a process 700, described below in detail with reference to FIG. 7. Instructions may also reside, partially or completely, within the CPU 202 and/or GPU 204, e.g., within cache memory, during execution of the instructions.
The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing instructions for execution by the device and that cause the device to perform any one or more of the methodologies disclosed herein. The term “machine-readable medium” includes, but is not limited to, data repositories in the form of solid-state memories, optical media, and magnetic media.
The secondary memory 214 is a memory separate from the primary memory 206. Similar to the primary memory 206, the secondary memory 214 is a machine-readable medium that stores instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. For example, the primary memory 206 may be a hard drive of the client device 140, and the secondary memory 214 may be a game disc.
The CPU 202 is processing circuitry configured to carry out the instructions stored in the primary memory 206 and/or secondary memory 214. The CPU 202 may be a general-purpose or embedded processor using any of a variety of instruction set architectures (ISAs). Although a single CPU is illustrated in FIG. 2, the client device 140 may include multiple CPUs 202. In multiprocessor systems, each of the CPUs 202 may commonly, but not necessarily, implement the same ISA.
The GPU 204 is a processing circuit specifically designed for efficient processing of graphical images. The GPU 204 may render objects to be displayed into a frame buffer (e.g., one that includes pixel data for an entire frame) based on instructions from the CPU 202. The GPU 204 may include one or more graphics processors that may execute graphics software to perform a part or all of the graphics operations.
The display controller 208 is a circuit that generates a video signal using graphical data from the GPU 204. For example, the display controller 208 drives a display device (e.g., a liquid crystal display (LCD) or a projector). As such, a game, including one or more characters with stretchable body parts, can be displayed as images or a sequence of image frames through the display controller 208.
The sound controller 212 is a circuit that provides input and output of audio signals to and from the client device 140. For purposes of a character, the sound controller 212 can provide audio signals that align with actions and objects in the computer game.
The user interface 210 is hardware, software, firmware, or a combination thereof that enables a user to interact with the client device 140. The user interface 210 can include an alphanumeric input device (e.g., a keyboard) and a cursor control device (e.g., a mouse, a trackball, a joystick, a motion sensor, or other pointing instrument). For example, a user uses a keyboard and mouse to control a character's action within a game environment that includes an electronic map rendered by the client device 140.
The client device 140 executes computer program modules for providing functionality described herein. As used herein, the term “module” refers to computer program instructions and/or other logic used to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In some embodiments, program modules formed of executable computer program instructions are loaded into the primary memory 206, and executed by the CPU 202 or the GPU 204. For example, program instructions for the process 700 described herein can be loaded into the primary memory 206 and/or secondary memory 214, and executed by the CPU 202 and GPU 204. In some embodiments, one or more of the functions of the modules described herein may be performed by dedicated circuitry.
FIG. 3 is a block diagram of software modules in a primary memory of the client device 140 of FIG. 1, according to an embodiment. In particular, FIG. 3 illustrates software modules in the primary memory 206 of the client device 140. The primary memory 206 may store, among other modules, a game system 300 and an operating system (“OS”) 380. The primary memory 206 may include other modules not illustrated in FIG. 3. Furthermore, in other embodiments, at least some of the modules in FIG. 3 are stored in the secondary memory 214.
The game system 300 includes a level manager 320, a physics system 330, a sound module 340, a terrain generator 350, an animation module 360, and a graphics rendering module 370. These modules collectively form a “game engine” of the game system 300.
The game system 300 includes operation modules 312A through 312N (collectively referred to as “operation modules 312”) to generate actions of characters within the game environment. At least some of the operation modules 312 prompt changes in poses of characters to realize actions. The operation modules 312 perform operations that change various parameters (e.g., poses or positions of a character) based upon the occurrence of certain events (e.g., user interactions, expirations of time, and triggers occurring in the game).
Some operation modules 312 are associated with actions taken by a character with stretchable body parts. Such a character may have one or more body parts that are stretchable or contractable (e.g., elastic). The character may appear to have flexible bones that stretch or contract as the character takes actions (e.g., stretching an arm to punch an opponent) or as the character becomes a subject of actions by other characters (e.g., receiving a punch from an opponent). One of such operation modules 312 is operation module 312J, which simulates jumping motions of the characters over a gap, as described below in detail with reference to FIG. 4.
The level manager 320 receives data objects from the server 120 and stores the level data in the primary memory 206. Other modules of the game system 300 can request one or more levels from the level manager 320, which responds to the requests by sending the level data corresponding to the requested level to the module of the game system 300 that sent the request.
The terrain generator 350 generates a complete electronic map based on electronic map data and game data. The terrain generator 350 receives electronic map data from the level manager 320, and game and object data from the content source stored, for example, in the secondary memory 214.
The physics system 330 models and simulates the dynamics of objects in the game environment. In some embodiments, after an operation module 312 is invoked in the game system 300, the physics system 330 models how an action or event associated with the operation module 312 affects objects or characters associated with the operation module 312. For example, the physics system models how a character should move when jumping over a gap. Depending on the action and object, other objects and actions may become associated with the action or object.
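A physics system of this kind may predict the trajectory of a jumping character by integrating simple projectile motion. The following sketch is purely illustrative: the function name, fixed timestep, and explicit Euler integration are assumptions, not the implementation described in this patent.

```python
GRAVITY = -9.8  # m/s^2, acting along the vertical (y) axis

def simulate_jump_trajectory(x0, y0, vx, vy, dt=1.0 / 60.0, steps=60):
    """Integrate projectile motion of the character's center of gravity
    and return the sampled (x, y) positions, one per frame."""
    positions = []
    x, y = x0, y0
    for _ in range(steps):
        vy += GRAVITY * dt   # gravity changes vertical velocity each frame
        x += vx * dt         # horizontal velocity is unchanged in flight
        y += vy * dt
        positions.append((x, y))
    return positions
```

Modules such as the animation selection described below could then compare the sampled trajectory against gap markers to decide whether the character clears the gap.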
The animation module 360 performs kinematic animation of objects based on the operation modules 312 of the game system 300. For example, if operation module 312J specifies that a character is to jump over a gap, the animation module 360 generates a series of character poses that collectively form the character's jumping motion. For this purpose, the animation module 360 is capable of performing blending of animation clips and inverse kinematics operations. Some of these characters may have stretchable body parts with one or more bones that can be stretched or contracted during such motions. The details of the animation module 360 are described below with reference to FIG. 5.
The sound module 340 generates sounds corresponding to events and/or actions occurring in the game environment. For example, a corresponding sound may be generated when a character punches an opponent. Animation data from the animation module 360 or terrain information from the terrain generator 350 may be sent to the sound module 340 to enable the sound module 340 to produce appropriate sound data that is sent to the sound controller 212.
The graphics rendering module 370 renders graphics using information from the animation module 360 to generate image frames. For this purpose, the graphics rendering module 370 may receive transformation matrices that indicate changes in vertices of polygons that collectively form a surface of the characters, and terrain information from the terrain generator 350. The graphics rendering module 370 processes blended animation clips, the transformation matrices, and the terrain information to generate graphical data that is sent to the GPU 204 for rendering images on a display screen (e.g., a display screen of the client device 140 or a display connected to the client device 140, via the display controller 208).
The OS 380 manages computer hardware and software resources. Specifically, the OS 380 acts as an intermediary between programs and the computer hardware. For example, the OS 380 can perform basic tasks, such as recognizing input from the user interface 210 and sending output to the display controller 208.
Animation clip storage 364 stores animation clips of one or more characters. The animation clips SD1 through SDZ with corresponding indices SN1 through SNZ may be retrieved and processed by the animation module 360 to generate motions of characters including, among others, a jumping motion of a character with stretchable body parts.
FIG. 4 is a block diagram of the operation module 312J for performing a jumping operation of a character with stretchable body parts. The operation module 312J may include, among others, a jump input detection module 412, a markup detection module 416, and an animation selection module 424. Although these modules are illustrated as separate code modules in FIG. 4, one or more of these modules may be combined into a single module or be split into more sub-modules.
The jump input detection module 412 detects whether a character controlled by a user should perform a jumping operation. For this purpose, the jump input detection module 412 detects user input received via the user interface 210 indicating a jump motion (e.g., pressing of a button on a game console controller). Depending on the stage or phase of the game, or on previous/subsequent user inputs, the same user input may indicate different actions of the character.
If the user input indicating the jump motion is detected, the markup detection module 416 detects various points of interest in the virtual environment of the character. The virtual environment may be marked up with various objects and points of interest (e.g., ledges, edges, hurdles, etc.). Among other things, the markup detection module 416 determines whether a gap is present in the path of the character, and if so, determines the configuration (e.g., width and height difference) of the gap. In one or more embodiments, the gap may be indicated by markers that indicate various objects in the simulated environment. The markup detection module 416 may also send, to the animation selection module 424, information on the distance or height to a point (e.g., a ledge) at the opposite side of the gap relative to the location where the character initiated the jump, as determined by the jump input detection module 412.
The markup detection module 416 also detects a location where one or more hands of the character can attach to simulate grabbing of an object by the character during or after the jump. The grabbable object can be, for example, a pole hanging over a gap, or a ledge or an edge at the other side of the gap. The markup detection module 416 creates an imaginary attach point and determines its location along a spline connecting marks in the simulated environment. When the user input indicating a jump is detected at the jump input detection module 412, the attach point within the trajectory of the character may be identified based on the markers. The permissible range of the grabbing operation may be set so that the attach point is not generated if the character's trajectory is above a threshold distance from markers representing a grabbable object. In one or more embodiments, different types of grabbing (e.g., grabbing by a single hand or both hands) may be permitted depending on the markers of the grabbable object.
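The attach-point determination described above can be sketched as a nearest-point query against a polyline connecting the markers, rejecting candidates beyond a reach threshold. This is an illustrative sketch under assumed names; the patent's spline-based computation is not disclosed in detail, and a straight polyline stands in here for the spline.

```python
import math

def closest_attach_point(markers, hand_pos, max_reach):
    """Return the point on the marker polyline nearest to hand_pos,
    or None if it lies farther than max_reach (no grab is generated)."""
    best, best_d = None, float("inf")
    for (ax, ay), (bx, by) in zip(markers, markers[1:]):
        # Project hand_pos onto segment a-b, clamping the parameter to [0, 1].
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy or 1e-9  # guard zero-length segments
        t = ((hand_pos[0] - ax) * dx + (hand_pos[1] - ay) * dy) / seg_len2
        t = max(0.0, min(1.0, t))
        px, py = ax + t * dx, ay + t * dy
        d = math.hypot(hand_pos[0] - px, hand_pos[1] - py)
        if d < best_d:
            best, best_d = (px, py), d
    return best if best_d <= max_reach else None
```

Returning None corresponds to the case where the character's trajectory is above the threshold distance from the grabbable object's markers, so no attach point is created.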
The animation selection module 424 determines whether predetermined conditions for using only a blended animation clip, without using inverse kinematics, are satisfied. The predetermined conditions may indicate that the character can reach the other side of the gap without stretching one or more limbs of the character. The animation selection module 424 may determine that the predetermined conditions are met, for example, when a distance to the opposite side of the gap from the point of initiating the jump is below a threshold, a height difference between both sides of the gap is less than another threshold, and the speed of the character is fast enough to jump across the gap. As another example, the predetermined conditions may be expressed in terms of an equation including the distance to the opposite side of the gap, the height difference between both sides of the gap, and the speed of the character. If these conditions are satisfied, then the character can jump over the gap without stretching any limbs. In such a case, sufficiently realistic image frames may be generated using only blending of animation clips prestored in the animation clip storage 364, without performing any inverse kinematics operations. Hence, the animation selection module 424 may send a jump selection signal 510 indicating the use of the blended animation clip. In this way, the use of computing resources associated with simulating the jumping motions can be reduced.
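The threshold test above can be sketched as follows. The function name and the specific threshold values are assumptions for illustration only; the patent leaves the exact thresholds and equation to the implementer.

```python
def use_blended_clips_only(gap_width, height_diff, speed,
                           max_width=3.0, max_height=1.0, min_speed=4.0):
    """Return True if blending prestored clips alone suffices (the cheap
    path), or False if stretchy inverse kinematics is also required.
    Thresholds are illustrative placeholders, not values from the patent."""
    return (gap_width < max_width          # gap narrow enough to clear
            and abs(height_diff) < max_height  # sides at similar heights
            and speed >= min_speed)        # run-up fast enough to jump
```

The returned boolean plays the role of the jump selection signal 510: True selects the blended clip alone, False additionally enables the kinematics engine 522.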
Conversely, if the animation selection module 424 determines that the character's limb (e.g., an arm) should be stretched to reach the opposite side of the gap, the animation selection module 424 generates the jump selection signal 510 to indicate the use of the kinematics engine 522 so that motions of the character with one or more stretched limbs can be generated using the blending of animation clips followed by stretchy inverse kinematics.
In other embodiments, the predetermined conditions may be defined by the presence of a marker indicative of an object to which one or more hands of the character can attach. The presence or absence of such objects in the vicinity of the character's trajectory may force the use of the blending or the inverse kinematics to generate the character's jumping motions. Various other predetermined conditions may be employed to balance realistic jumping motion against the use of reduced computing resources.
The animation module 360 generates animation data in the form of a transformation matrix 542 or a blended animation clip 548, depending on the type of jump indicated in the jump selection signal 510. The blended animation clip 548 represents a series of poses of the character. For this purpose, the animation module 360 may include, among other components, a selector module 518, a kinematics engine 522, and an animation blender module 530.
The selector module 518 receives the jump selection signal 510 from the operation module 312J and enables or disables the kinematics engine 522 (subsequent to the animation blender module 530) to generate the animation data of the character. If the kinematics engine 522 is disabled, the selector module 518 outputs the blended animation clip 548 generated by the animation blender module 530 as the output of the animation module 360. If the kinematics engine 522 is enabled, the selector module 518 forwards the blended animation clip 548 to the kinematics engine 522, which modifies the character's poses as represented by the blended animation clip 548 into a transformation matrix 542 representing the character's updated poses.
The kinematics engine 522 includes a stretching inverse kinematics (IK) engine 526 that generates the transformation matrix 542 for a character that has at least one stretchable body part. The stretching IK engine 526 may operate as described, for example, in U.S. patent application Ser. No. 16/915,732, filed on Jun. 29, 2020 and entitled “Performing Simulation of Stretchable Character in Computer Game,” which is incorporated by reference herein in its entirety. The kinematics engine 522 may receive physical properties 512 (e.g., a position, linear velocity, angular velocity, mass, inertia tensor, and orientation of the character) from the physics system 330, character movement information 514 indicating constraints or features of poses and movements of the character, and the blended animation clip 548 from the animation blender module 530 via the selector module 518. The transformation matrix 542 generated as a result of the processing at the kinematics engine 522 is sent to the graphics rendering module 370 for further processing.
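One way to realize stretchy IK for a limb is a planar two-bone solver that, instead of clamping when the target lies beyond the limb's rest length, uniformly scales the bone lengths up to a stretch limit. This sketch is an assumption for illustration; the actual solver of the referenced application Ser. No. 16/915,732 is not reproduced here, and all names are hypothetical.

```python
import math

def stretchy_two_bone_ik(l1, l2, target, max_stretch=1.5):
    """Solve a two-bone chain rooted at the origin in 2D.
    Returns (root_angle, elbow_angle, stretch_scale). If the target is
    beyond the rest length l1 + l2, both bones are scaled uniformly
    (capped at max_stretch) so the limb visibly stretches."""
    tx, ty = target
    dist = math.hypot(tx, ty)
    reach = l1 + l2
    scale = min(dist / reach, max_stretch) if dist > reach else 1.0
    a, b = l1 * scale, l2 * scale
    # Clamp the effective distance in case even stretched bones fall short.
    d = max(min(dist, a + b), 1e-9)
    # Law of cosines gives the interior angles of the bone triangle.
    cos_root = max(-1.0, min(1.0, (a * a + d * d - b * b) / (2 * a * d)))
    cos_elbow = max(-1.0, min(1.0, (a * a + b * b - d * d) / (2 * a * b)))
    root_angle = math.atan2(ty, tx) - math.acos(cos_root)
    elbow_angle = math.pi - math.acos(cos_elbow)
    return root_angle, elbow_angle, scale
```

A returned scale greater than 1.0 corresponds to lengthened bones, which downstream code would bake into the transformation matrices sent to the rendering module.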
The animation blender module 530 receives a jump parameter 532 from the operation module 312J, retrieves a subset of animation clip data from the animation clip storage 364, blends the retrieved animation clip data, and generates the blended animation clip 548. The animation blender 530 may receive the jump parameter 532 (e.g., the speed of the character when jumping) from the operation module 312J, identify indices of appropriate animation clips, and retrieve a subset of animation clips of the character from the animation clip storage 364 based on the identified indices. The animation blender 530 then blends the subset of animation clips by applying time-varying weights as determined from the jump parameter 532. The animation blender 530 may perform a blending method well known in the art. The blended animation clip 548 generated as a result is sent to the graphics rendering module 370 for further processing. In one or more embodiments, the animation blender module 530 is part of a forward kinematics module.
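A common blending method of the kind alluded to above is a per-joint linear interpolation between two clips, with the weight derived from the jump parameter (here, the character's speed). The function names, the two-clip restriction, and the speed range are illustrative assumptions, not the patent's implementation.

```python
def blend_poses(clip_a, clip_b, weight_b):
    """Linearly blend two per-joint angle lists for one frame."""
    w = max(0.0, min(1.0, weight_b))  # clamp the blend weight to [0, 1]
    return [(1.0 - w) * a + w * b for a, b in zip(clip_a, clip_b)]

def blend_by_speed(slow_clip, fast_clip, speed, slow_speed=2.0, fast_speed=6.0):
    """Map the character's speed (the jump parameter) to a blend weight:
    the faster the run-up, the closer the result is to the fast clip."""
    if fast_speed == slow_speed:
        w = 1.0
    else:
        w = (speed - slow_speed) / (fast_speed - slow_speed)
    return blend_poses(slow_clip, fast_clip, w)
```

Applying such a blend once per frame, with the weight varying over the course of the jump, yields the time-varying weighting described above.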
FIGS. 6A through 6E are conceptual diagrams illustrating various jumping motions of characters, according to embodiments. FIG. 6A illustrates character CA at time (t−1) on a platform L1A, jumping over a gap GA at time (t) and landing on another platform L2A at time (t+1). The width DA of the gap GA is sufficiently small that the character can jump without using its stretching capabilities. Hence, the animation selection module 424 generates the jump selection signal 510 indicating that only the animation blender 530 be used, without the kinematics engine 522, to generate a motion of the character jumping.
FIG. 6B illustrates a scenario where character CB runs on platform L1B at time (t−1) and jumps over a gap GB of width DB at time (t). But due to the wider width DB of gap GB, the character CB grabs the edge (or a ledge) of platform L2B at time (t+1) using both arms, pulls herself up, and starts to walk or run on platform L2B at time (t+2). The edge or the ledge of platform L2B is marked (shown as X) as a location where the character's hands can attach. Hence, the markup detection module 416 identifies markers indicating the edge or the ledge to which the character's hand or hands can attach during the jumping motion. The width or distance DB is detected by the markup detection module 416. Based on the distance from the character's jumping point on platform L1B to the edge or the ledge of platform L2B and the speed/trajectory of the character, the animation selection module 424 determines that the character CB can grab onto the edge or the ledge without stretching the character's arms. Hence, the animation selection module 424 generates the jump selection signal 510 indicating that only the animation blender 530 be used, without the kinematics engine 522, to generate a motion of the character jumping.
FIG. 6C illustrates a scenario where character CC jumps from platform L1C at time (t−1) but starts to drop into the gap GC of width DC at time (t) because the width DC and/or height difference HC is large or the speed of the character CC is insufficient. The markup detection module 416 detects the width DC of the gap GC as well as the height difference HC, and sends the information to the animation selection module 424. The animation selection module 424 determines that the combination of the distance DC, the height difference HC, the jumping point on the platform L1C, and/or the speed of the character CC would allow the character CC to simulate grabbing the edge or ledge of platform L2C by attaching one or more hands of the character to locations marked with “X” when the character's arms are stretched at time (t+1). Hence, the animation selection module 424 generates the jump selection signal 510 to indicate that the kinematics engine 522 is to be enabled to generate the transformation matrix using stretchy inverse kinematics. After grabbing the edge or ledge and pulling herself out of the gap GC, the character CC walks or runs on platform L2C at time (t+2).
FIG. 6D illustrates a scenario where a character CD jumps from platform L1D at time (t−1) and grabs onto an object GO (e.g., a pole) at time (t) over a gap GD of width DD and height difference HD. In this example, a user input may be received to grab the object GO after the character CD jumps from the platform L1D, but the animation selection module 424 determines that the object GO is not reachable by the character CD unless her arm is stretched. Hence, the animation selection module 424 generates the jump selection signal 510 to indicate that the kinematics engine 522 is to be used in addition to the animation blender 530. The kinematics engine 522 generates poses of the character CD as the character stretches her arm and swings to the other side under the object GO at time (t+1) using stretchy inverse kinematics, taking into account the force applied to the character's body due to the grabbing of the object GO. Simultaneously, the physics system 330 determines the trajectory of the character's center of gravity based on physical parameters (e.g., velocity, mass, rotation) of the character. Hence, the kinematics engine 522 generates more realistic image frames showing a sequence of the character's poses during the jump, grab, and release operations. After swinging under the object GO, the character lands on the platform L2D at time (t+2).
FIG. 6E illustrates a scenario where a character CE jumps from platform L1E at time (t−1), stretches her legs at time (t) to jump over a gap GE of width DE, and lands on platform L2E at time (t+1). Contrary to the scenarios of FIGS. 6A through 6D, the landing platform L2E does not have any locations with markers to which the character's hand or hands can attach, and the distance DE is too far for the character CE to reach. In such a case, the legs of the character CE may be stretched to accomplish a successful jump as shown in FIG. 6E. Such a jumping motion may be generated either by blending animation clips stored in animation clip storage 364 or by performing stretchy inverse kinematics so that the bones in the legs are extended. The stretching of legs may also be used in various other situations.
Contrary to the scenarios described above with reference to FIGS. 6A through 6E, if the operation module 312 (specifically, the animation selection module 424) determines that the character cannot make a successful jump, the animation selection module 424 may instruct the animation module 360 to replay a scene of the character falling through the gap.
Embodiments described above with reference to FIGS. 6A through 6E are merely illustrative. In one or more embodiments, a character may perform stretching of legs during the jump in the scenarios of FIGS. 6B through 6E while grabbing a grabbable object only when further user inputs are received. Moreover, both the arms and legs of the character may be extended when jumping over a gap that is wider than a threshold level, while only the legs are extended when the gap is narrower than the threshold. The selection between the use of blended animation clips and the use of inverse kinematics may be determined based on criteria that produce realistic motions, as needed, to reduce the computation resources used to simulate the jumping motions.
FIGS. 7A through 7C are diagrams illustrating image frames including a series of motions of the character associated with the jumping motions, according to an embodiment. FIG. 7A illustrates a character 708 running on platform 702 towards platform 704 that is separated by gap 710. FIG. 7B illustrates the same character 708 grabbing onto a ledge on the platform 704 as she drops into the gap 710. As shown in FIG. 7B, the character's arm is stretched using the blending of the animation clips and the stretchy inverse kinematics to increase the lengths of the bones in the arm.
FIG. 7C is a diagram where the gap 710 is sufficiently narrow such that the character can grab onto the ledge on the platform 704 without stretching her arm. A series of subsequent image frames includes the motion of the character generated by blending prestored animation clips without using the stretchy inverse kinematics.
FIG. 8 is a flowchart illustrating a process of simulating a jumping motion of a character, according to one embodiment. A jump command is received 810 from a user instructing the character to jump over a gap. Such a command is detected at the jump input detection module 412, as described above in detail with reference to FIG. 4.
The markup detection module 416 determines 814 whether a gap is present in the path of the character and, if so, detects the configuration (e.g., width and height) of the gap. Based on the determination of the markup detection module 416, the animation selection module 424 determines 818 whether predetermined conditions are met. The predetermined conditions may, for example, be described as a function of a distance to the opposite side of the gap from a point of initiating the jumping, a height difference at both sides of the gap, and the speed of the character.
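For illustration only, the check at step 818 could be sketched as a ballistic-range test over the gap parameters just listed; the fixed launch angle, the function name, and the success criterion are assumptions introduced here rather than details from the disclosure:

```python
import math

def can_clear_gap(speed, gap_width, height_diff,
                  launch_angle_deg=45.0, gravity=9.81):
    """Return True if a ballistic jump at `speed` clears the gap.

    A hypothetical stand-in for the "predetermined conditions": the
    jump succeeds without limb stretching when the projectile range
    reaches the far side, whose edge sits `height_diff` above (+) or
    below (-) the takeoff point.
    """
    theta = math.radians(launch_angle_deg)
    vx = speed * math.cos(theta)
    vy = speed * math.sin(theta)
    # Solve height_diff = vy*t - 0.5*g*t^2 for the landing time t.
    disc = vy * vy - 2.0 * gravity * height_diff
    if disc < 0.0:          # apex never reaches the far platform
        return False
    t_land = (vy + math.sqrt(disc)) / gravity
    return vx * t_land >= gap_width
```

Under this sketch, a False result would correspond to the branch in which the stretchy inverse kinematics is enabled.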
If the predetermined conditions are satisfied, the jumping motion can be generated by blending prestored animation clips. Hence, animation clips for blending are determined 822, for example, based on the speed of the character at the time the character is initiating the jump. Then the animation blender 530 may retrieve and blend 826 the selected animation clips to generate a blended animation clip.
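A hypothetical sketch of the speed-based clip selection at step 822 follows; the clip names, the mapping of clips to authored speeds, and the linear crossfade weights are all assumptions, not details from the disclosure:

```python
def blend_weights(speed, clip_speeds):
    """Pick the two prestored clips bracketing `speed` and compute
    their blend weights.

    `clip_speeds` maps a clip name to the character speed the clip
    was authored for (a hypothetical selection rule).
    """
    items = sorted(clip_speeds.items(), key=lambda kv: kv[1])
    # Clamp below the slowest / above the fastest authored clip.
    if speed <= items[0][1]:
        return {items[0][0]: 1.0}
    if speed >= items[-1][1]:
        return {items[-1][0]: 1.0}
    for (a, sa), (b, sb) in zip(items, items[1:]):
        if sa <= speed <= sb:
            w = (speed - sa) / (sb - sa)   # linear crossfade
            return {a: 1.0 - w, b: w}
```

The returned weights would then drive a pose-by-pose interpolation of the selected clips in the animation blender.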
Conversely, if the predetermined conditions are not satisfied, the presence of a location to which one or more hands of the character can attach (e.g., a pole, ledge or edge) is determined 830 by the markup detection module 416. Such a determination may be made, for example, by detecting markers indicative of objects or locations to which the character's hand can attach in the vicinity of the trajectory of the character during her jumping motion.
Blended poses of the character are generated 832 by blending stored animation clips. Then the blended poses are updated 834 using stretchy inverse kinematics to generate poses of the character during the jump. The character's poses indicate stretching at least one arm of the character toward the grabbable object during the jump. Moreover, the movement of the character is determined 838 using physics.
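Purely for illustration, the pose update using stretchy inverse kinematics could be approximated by a planar two-bone solver that uniformly scales bone lengths when the grab target lies beyond the arm's rest reach; the function name, the stretch cap, and the planar simplification are assumptions introduced here:

```python
import math

def stretchy_two_bone_ik(shoulder, target, upper_len, lower_len,
                         max_stretch=1.5):
    """Planar two-bone IK with bone stretching (illustrative sketch).

    When the target is beyond the arm's rest reach, both bones are
    scaled uniformly (capped at `max_stretch`) so the hand can
    attach.  Returns (elbow_angle_rad, bone_scale).
    """
    dx = target[0] - shoulder[0]
    dy = target[1] - shoulder[1]
    dist = math.hypot(dx, dy)
    reach = upper_len + lower_len
    scale = 1.0
    if dist > reach:                      # stretch the bones
        scale = min(dist / reach, max_stretch)
    a, b = upper_len * scale, lower_len * scale
    d = min(dist, a + b)                  # clamp if still short
    # Law of cosines for the interior elbow angle.
    cos_elbow = (a * a + b * b - d * d) / (2.0 * a * b)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    return elbow, scale
```

An elbow angle of pi with a scale above 1.0 corresponds to a fully straightened, stretched arm reaching the marked attachment location.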
The processes, steps and their sequences illustrated in FIG. 8 are merely illustrative, and various modifications can be made to the process. For example, the generating 834 of poses may be performed in parallel with the determining 838 of the movement of the character, or their order may be reversed.
Although the above embodiments are described primarily with the predetermined conditions related to determining whether a stretchable character may reach the opposite side of the gap by jumping, other conditions may also be used to determine whether image frames including the character are generated by blending images or by performing inverse kinematics. For example, the size of the character shown in the scene may be used as an alternative or additional factor for determining whether to generate the image frames through blending or inverse kinematics. Also, the processing capacity of the client devices 140 may be taken into account to determine how the image frames are to be generated. For example, a client device with low processing capabilities may use blending more widely while sparingly using the inverse kinematics in limited circumstances.
Further, although the above embodiments are described with reference to the motions associated with jumping over a gap, the same principle may be applied to other motions such as jumping onto a wall.
While particular embodiments and applications have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope of the present disclosure.
Claims
- A computer-implemented method for simulating a jumping motion of a character in a computer game, comprising: determining whether predetermined conditions associated with jumping of a character over a gap are satisfied; responsive to satisfying the predetermined conditions, generating a first series of image frames including a first jumping motion of the character over the gap based on prestored animation data of the character without performing inverse kinematics that simulate stretching of at least one bone in a limb of the character; and responsive to not satisfying the predetermined conditions, generating a second series of image frames including a second jumping motion of the character over the gap by at least performing the inverse kinematics that simulate the stretching of the at least one bone in the limb of the character.
- The method of claim 1, wherein the predetermined conditions are associated with one or more of (i) a distance to the opposite side of the gap from a point of initiating the jumping, (ii) a height difference at both sides of the gap, and (iii) a speed of the character when the character is performing the jumping, and satisfying the predetermined conditions indicates that the character reaches an opposite side of the gap without stretching one or more limbs of the character.
- The method of claim 1, wherein performing the inverse kinematics comprises determining a pose of the character by changing a length of at least one bone in the limb of the character.
- The method of claim 3, further comprising determining poses of the character and a trajectory of the character during the jumping motion by simulating motions of the character according to physical properties of the character including at least one of a position, linear velocity, angular velocity, mass, inertia tensor and orientation of the character.
- The method of claim 4, wherein the character's hand attaches to a location during the jumping before landing on the other side of the gap, and wherein the simulating the motions of the character is further based on the inverse kinematics and a force applied to the character by the attached object.
- The method of claim 5, further comprising determining the location based on one or more markers in a simulated environment responsive to receiving a user command indicating a grabbing or jumping action.
- The method of claim 1, wherein the first series of image frames are generated by blending a subset of prestored animation clips of the character.
- The method of claim 7, wherein the subset of prestored animation clips is selected based on a speed of the character when the character is performing the jumping.
- The method of claim 1, further comprising receiving a command from a user to initiate the jumping before generating the first series of image frames or the second series of image frames.
- The method of claim 1, wherein the first series of image frames include the character grabbing a wall or a ledge at the other side of the gap using both arms of the character, and wherein the second series of image frames include the character grabbing an object placed between the one side of the gap and the other side of the gap using a single arm or grabbing the wall or the ledge at the other side of the gap using the single arm.
- A non-transitory computer-readable storage medium storing instructions thereon, the instructions when executed by one or more processors cause the one or more processors to: determine whether predetermined conditions associated with jumping of a character over a gap are satisfied; responsive to satisfying the predetermined conditions, generate a first series of image frames including a first jumping motion of the character over the gap based on prestored animation data of the character without performing inverse kinematics that simulate stretching of at least one bone in a limb of the character; and responsive to not satisfying the predetermined conditions, generate a second series of image frames including a second jumping motion of the character over the gap by at least performing the inverse kinematics that simulate the stretching of the at least one bone in the limb of the character.
- The computer-readable storage medium of claim 11, wherein the predetermined conditions are associated with one or more of (i) a distance to the opposite side of the gap from a point of initiating the jumping, (ii) a height difference at both sides of the gap, and (iii) a speed of the character when the character is performing the jumping, and satisfying the predetermined conditions indicates that the character reaches an opposite side of the gap without stretching one or more limbs of the character.
- The computer-readable storage medium of claim 11, wherein the instructions to perform the inverse kinematics comprise instructions to determine a pose of the character by changing a length of the at least one bone in the limb of the character.
- The computer-readable storage medium of claim 13, further comprising instructions that cause the one or more processors to: determine poses of the character and a trajectory of the character during the jumping by simulating motions of the character according to physical properties of the character including at least one of a position, linear velocity, angular velocity, mass, inertia tensor and orientation of the character.
- The computer-readable storage medium of claim 14, wherein the character's hand attaches to a location during the jumping before landing on the other side of the gap, and wherein the simulating the motions of the character is further based on a force applied to the character by the attached object and the inverse kinematics.
- The computer-readable storage medium of claim 15, further comprising instructions that cause the one or more processors to determine the location based on one or more markers in a simulated environment responsive to receiving a user command indicating a grabbing or jumping operation.
- The computer-readable storage medium of claim 11, wherein the first series of image frames are generated by blending a subset of prestored animation clips of the character, wherein the subset of prestored animation clips is selected based on a speed of the character when the character is performing the jumping motion.
- The computer-readable storage medium of claim 11, wherein the instructions cause the one or more processors to receive a command from a user to initiate the jumping motion before generating the first series of image frames or the second series of image frames.
- The computer-readable storage medium of claim 11, wherein the first series of image frames include the character grabbing a wall or a ledge at the other side of the gap using both arms of the character, and wherein the second series of image frames include the character grabbing an object placed between the one side of the gap and the other side of the gap using a single arm or grabbing the wall or the ledge at the other side of the gap using the single arm.
- A non-transitory computer-readable storage medium storing image frames generated by: determining whether predetermined conditions associated with jumping of a character over a gap are satisfied; responsive to satisfying the predetermined conditions, generating a first series of image frames including a first jumping motion of the character over the gap based on prestored animation data of the character without performing inverse kinematics that simulate stretching of at least one bone in a limb of the character; and responsive to not satisfying the predetermined conditions, generating a second series of image frames including a second jumping motion of the character over the gap by at least performing the inverse kinematics that simulate the stretching of the at least one bone in the limb of the character.