U.S. Pat. No. 12,141,614

SCENE ENTITY PROCESSING USING FLATTENED LIST OF SUB-ITEMS IN COMPUTER GAME

Assignee: Square Enix Ltd.

Issue Date: October 12, 2021

Illustrative Figure

Abstract

Embodiments relate to storing hierarchically structured sub-items of scene entities in a flattened list of sub-items and performing time-constrained tasks on the sub-items in the flattened list. By storing the sub-items in the flattened list, an approximate time for processing the sub-items can be estimated more accurately, which reduces the likelihood of making an overly conservative estimate of the time for processing the sub-items. One or more sub-items of updated scene entities are extracted by a plurality of collectors that are executed in parallel to store the one or more sub-items in the flattened list. The sub-items are then accessed by multiple tasks executed in parallel to determine priority information associated with inclusion and rendering in subsequent frames. Sub-items with higher priority according to the priority information are given higher priority for retrieval from secondary memory and storage in primary memory.

Description


The figures depict various embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.

DETAILED DESCRIPTION

In the following description of embodiments, numerous specific details are set forth in order to provide a more thorough understanding. However, note that the embodiments may be practiced without one or more of these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

Embodiments relate to converting hierarchically structured sub-items of scene entities into a flattened list of sub-items and performing time-constrained tasks on the sub-items from the flattened list. By storing the sub-items in the flattened list, an approximate time for processing the sub-items can be estimated more accurately, and therefore, the likelihood of making an overly conservative estimate of the time for processing the sub-items may be reduced. One or more sub-items of updated scene entities are extracted by a plurality of collectors that are executed in parallel to extract and store the one or more sub-items in the flattened list. The sub-items are then accessed by multiple prioritization jobs executed in parallel to determine priority information associated with inclusion and rendering of the sub-items in subsequent frames. Data associated with sub-items of higher priority according to the priority information is given higher priority for transferring from secondary memory to primary memory.

A scene entity described herein refers to a data object that is processed for inclusion in a scene. The scene entity may be hierarchically structured to include one or more sub-items for graphics rendering operations. Different scene entities may include different numbers of sub-items and/or different levels of hierarchy. A scene entity (e.g., a tree) may be divided into multiple sub-entities (e.g., branches and fruits).

A sub-item described herein refers to a unit of data associated with a graphics operation of a scene entity or a sub-entity. The sub-item may, for example, include identification information of mesh data or texture data of a scene entity or its sub-entity. The identification information may indicate a memory location of the mesh data or the texture data in primary memory or secondary memory. Different scene entities or sub-entities may have different types and/or numbers of sub-items. The sub-item may also include metadata associated with the corresponding mesh data or texture data.
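For illustration, a sub-item record of the kind described above could be sketched as a small data class. The field names here are hypothetical choices for the sketch, not names taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SubItem:
    """Hypothetical sketch of a sub-item: a unit of data tied to one
    piece of mesh or texture data of a scene entity or sub-entity."""
    kind: str             # "mesh" or "texture"
    resource_id: int      # identification information for the mesh/texture data
    memory_location: str  # e.g., "primary" or "secondary"
    metadata: dict = field(default_factory=dict)  # e.g., available levels of detail

# One texture sub-item whose data currently resides in secondary memory.
item = SubItem(kind="texture", resource_id=7, memory_location="secondary",
               metadata={"lod_levels": 3})
```

Because every entry carries the same small, fixed set of fields, processing one entry takes roughly constant time, which is the property the flattening scheme later relies on.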

Example Architecture of Game Environment

FIG. 1 is a block diagram of a system 100 in which the techniques described herein may be practiced, according to an embodiment. The system 100 includes a content creator 110, a server 120, client devices 140, and a network 144. In other embodiments, the system 100 may include additional content creators 110 or servers 120, or may include a single client device 140.

The content creator 110, the server 120, and the client devices 140 are configured to communicate via the network 144. The network 144 includes any combination of local area and/or wide area networks, using both wired and/or wireless communication systems. In one embodiment, the network 144 uses standard communications technologies and/or protocols. For example, the network 144 includes communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network 144 include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). Data exchanged over the network 144 may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML). In some embodiments, all or some of the communication links of the network 144 may be encrypted using any suitable technique or techniques.

The content creator 110 is a computing device, such as a gaming system, a personal computer, a mobile phone, or a tablet. A user of the content creator 110 creates various game resources such as images, terrains, characters, and gaming objects. The created game resources may be dynamic and may change over time, so the game resources may be updated and sent to the server 120 for storage.

The server 120 is a computing device that stores, among other things, game resource 130 for access by the client devices 140. Part or all of the game resource 130 may be transferred to the client device 140 to execute a computer game. The server 120 may also manage user accounts and other management functions such as billing.

Each client device 140 is a computing device that includes a game or other software. The client device 140 receives the game resources (e.g., data objects) from the server 120 and uses the game resources to execute a computer game. Different client devices 140 can request different game resources from the server 120.

Although the game resources are described as being sent from the server 120 to the client devices 140 in FIG. 1, embodiments described herein may also be used in games where game resources are stored and installed entirely or partially from a tangible storage medium (e.g., a DVD or CD-ROM). Regardless of whether the game resources are received via a server (as shown in FIG. 1) or retrieved from the tangible storage medium, a large portion of the game resources is typically stored in secondary memory and a smaller portion of the game resources is stored in primary memory for a higher access speed.

Example Embodiment of Client Device

FIG. 2 is a block diagram of the client device 140 of FIG. 1, according to an embodiment. Depending upon the embodiment, the content creator 110 and/or server 120 may comprise a computing device that includes some or all of the hardware and/or software elements of the client device 140 described herein. The client device 140, content creator 110, and/or server 120 are any machines capable of executing instructions, and may each be a standalone device or a connected (e.g., networked) set of devices. For example, in one embodiment, the content creator 110 is a client device 140.

The client device 140 includes a central processing unit ("CPU") 202, a graphics processing unit ("GPU") 204, a primary memory 206, a secondary memory 214, a display controller 208, a user interface 210, and a sound controller 212 that are connected by a bus 216. While only a single client device 140 is illustrated, other embodiments may include any collection of client devices 140 that individually or jointly execute instructions to perform any one or more of the methodologies discussed herein.

The primary memory 206 is a machine-readable medium that stores instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. For example, the primary memory 206 may store instructions that, when executed by the CPU 202, configure the CPU 202 to perform the process described below in detail with reference to FIG. 6. Instructions may also reside, partially or completely, within the CPU 202 and/or GPU 204, e.g., within cache memory, during execution of the instructions.

The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing instructions for execution by the device and that cause the device to perform any one or more of the methodologies disclosed herein. The term “machine-readable medium” includes, but is not limited to, data repositories in the form of solid-state memories, optical media, and magnetic media.

The secondary memory 214 is a memory separate from the primary memory 206 but with a slower access speed relative to the primary memory 206. Similar to the primary memory 206, the secondary memory 214 is a machine-readable medium that stores instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. For example, the primary memory 206 may be a hard drive of the client device 140, and the secondary memory 214 may be a game disc for the game that uses the terrain. As a specific example, the primary memory 206 may store a game system 300 that uses hex data stored on the secondary memory 214. The primary memory 206 and secondary memory 214 are described in greater detail with reference to FIG. 3 below. In one or more embodiments, the secondary memory 214 has a larger memory space compared to the primary memory 206.

The CPU 202 is processing circuitry configured to carry out the instructions stored in the primary memory 206 and/or secondary memory 214. The CPU 202 may be a general-purpose or embedded processor using any of a variety of instruction set architectures (ISAs). Although a single CPU is illustrated in FIG. 2, the client device 140 may include multiple CPUs 202. In multiprocessor systems, each of the CPUs 202 may commonly, but not necessarily, implement the same ISA.

The GPU 204 is a processing circuit specifically designed for efficient processing of graphical images. The GPU 204 may render objects to be displayed into a frame buffer (e.g., one that includes pixel data for an entire frame) based on instructions from the CPU 202. The GPU 204 may include one or more graphics processors that may execute graphics software to perform a part or all of the graphics operations.

The display controller 208 is a circuit that generates a video signal using graphical data from the GPU 204. For example, the display controller 208 drives a display device (e.g., a liquid crystal display (LCD) or a projector). As such, a game, including terrain, can be displayed as images or a video sequence through the display controller 208.

The sound controller 212 is a circuit that provides input and output of audio signals to and from the client device 140.

The user interface 210 is hardware, software, firmware, or a combination thereof that enables a user to interact with the client device 140. The user interface 210 can include an alphanumeric input device (e.g., a keyboard) and a cursor control device (e.g., a mouse, a trackball, a joystick, a motion sensor, or other pointing instrument). For example, a user uses a keyboard and mouse to control a character's actions within a game environment that includes a terrain or hive rendered by the client device 140.

The client device 140 executes computer program modules for providing functionality described herein. As used herein, the term "module" refers to computer program instructions and/or other logic used to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In some embodiments, program modules formed of executable computer program instructions are loaded into the primary memory 206 and executed by the CPU 202 or the GPU 204. For example, program instructions for the processes described herein can be loaded into the primary memory 206 and/or secondary memory 214, and executed by the CPU 202 and GPU 204.

Example Embodiment of Software Architecture in Game System

FIG. 3 is a block diagram of software modules of the client device 140 of FIG. 1, according to one embodiment. In particular, FIG. 3 illustrates software modules in the primary memory 206 and the secondary memory 214 of the client device 140. The primary memory 206 may store, among other modules, a game system 300 and an operating system ("OS") 380. The secondary memory 214 may include, among other modules, a resource storage 318. The primary memory 206 and secondary memory 214 may include other modules not illustrated in FIG. 3. Furthermore, in other embodiments, the primary memory 206 and secondary memory 214 may each store software modules and data represented herein as stored in the other.

The game system 300 includes a physics system 328, a sound module 332, an animation module 340, a scene resource collector 348, a scene resource retriever 352, a scene resource storage 356, and a graphics rendering module 344. These modules collectively form a "game engine" of the game system 300. The game system 300 may include further software modules or omit one or more of the modules illustrated in FIG. 3.

The game system 300 performs operations 312A through 312N (collectively referred to as "the operations 312") to accomplish various actions and tasks of the game. Specifically, the game system 300 may perform operations such as simulating interactions with objects in the game (e.g., the user's game character fighting an opponent character), producing cutscenes, and generating environments of scenes. The operations 312 refer to computing operations that result in changes in various parameters (e.g., states of objects and user status) based upon certain events (e.g., user interactions, expirations of time, and triggers occurring in the game). Some of these operations 312 may interoperate with other modules such as the physics system 328 or animation module 340.

One or more of these operations 312 are associated with changes in scene entities that are displayed in scenes of the game. Examples of such operations include changing the appearance of a character, removing branches or fruits from a tree, and creating cracks or holes in a wall. When such operations are performed, a scene entity (e.g., a character, tree, or wall) is updated; hence, sub-items (e.g., texture or mesh) of the scene entity or its sub-entities may be updated accordingly. When such updating occurs, the sub-items of the scene entity or its sub-entities are changed, and a process to update these sub-items may be performed to reflect the change in the rendering process performed at the graphics rendering module 344.

The physics system 328 models and simulates the dynamics of objects in the game environment. After an operation 312 is initiated in the game system 300, the physics system 328 models how the action affects the object associated with the operation 312. For example, the physics system 328 models a rock as it rolls down a hill. Depending on the action and object, other objects and actions may become associated with the action or object. For example, a thrown rock may knock over another object. This may trigger a new operation 312 where the object is hit by the rock. The physics system 328 uses terrain information, e.g., data pertaining to a terrain generated by the terrain generator 350, when modeling and simulating the dynamics of the objects in the game environment. For example, returning to the rolling rock example, the physics system 328 may determine that the rock must roll down the hill by identifying that the rock is positioned within the terrain such that it is on a slope of the hill.

The animation system 340 is a module that performs kinematic animation of objects or the game environment based on the operations 312 from the game system 300. For example, if an operation 312 specifies that a robotic arm is moving, the animation system 340 animates the kinematics of the arm movement. The animation system 340 may include any number of specialized modules which perform specific animation tasks.

The sound module 332 generates sounds corresponding to actions occurring in the game environment. Animation data from the animation system 340 may be sent to the sound module 332 to enable the sound module 332 to produce sound. The sound module 332 sends sound data to the sound controller 212.

The graphics rendering module 344 renders graphics from various sources (e.g., the animation system 340) to generate scenes of the game environment. For example, the graphics rendering module 344 receives mesh data and texture data of a character from the scene resource storage 356 and generates graphics data for rendering the character. The graphics rendering module 344 sends graphical data to the GPU 204 to render images on a display, e.g., a display of the client device 140 or a display connected to the client device 140, via the display controller 208.

The scene resource collector 348 is a software module that detects updated scene entities as a result of operations 312 and prioritizes retrieval of relevant sub-items from the resource storage 318 of the secondary memory 214 into the scene resource storage 356. Although it is desirable that all updates to scene entities be reflected immediately in subsequent frames of scenes, memory availability in the primary memory 206 and the data transfer rate of the secondary memory 214 may delay transfer of relevant data associated with sub-items from the resource storage 318 to the scene resource storage 356. Hence, the scene resource collector 348 may provide priority information and identification information of the updated sub-items to the scene resource retriever 352 so that more important sub-items may have their data transferred earlier from the resource storage 318 than data for less important sub-items, as described below in detail with reference to FIG. 5.

The scene resource retriever 352 is a software module that retrieves data from the resource storage 318 and stores it in the scene resource storage 356. Due to the limited memory allocated to the scene resource storage 356, not all data for scene entities can be stored in the scene resource storage 356 at the same time. Hence, only a subset of the data for scene entities that are likely to be used in rendering future frames, or data for important scene entities, is stored in the scene resource storage 356. The scene resource retriever 352 may take into account the priority information provided by the scene resource collector 348 to decide how fast the data is to be retrieved from the resource storage 318 and stored in the scene resource storage 356.
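The priority-ordered transfer described above can be sketched with a max-heap, so that higher-priority sub-items are copied from secondary to primary storage first. All names below (`retrieve_by_priority`, the dict-based stores, the transfer budget) are illustrative assumptions, not the patent's implementation:

```python
import heapq

def retrieve_by_priority(priority_info, secondary, primary, budget):
    """Copy up to `budget` resources from secondary to primary storage,
    highest priority first (hypothetical sketch)."""
    # heapq is a min-heap, so negate priorities to pop the largest first.
    heap = [(-prio, res_id) for res_id, prio in priority_info.items()]
    heapq.heapify(heap)
    transferred = []
    while heap and len(transferred) < budget:
        _, res_id = heapq.heappop(heap)
        primary[res_id] = secondary[res_id]  # the actual data transfer
        transferred.append(res_id)
    return transferred

# With a budget of 2 transfers, only the two highest-priority resources move.
secondary = {"mesh1": b"m1", "tex1": b"t1", "tex2": b"t2"}
primary = {}
order = retrieve_by_priority({"mesh1": 5, "tex1": 9, "tex2": 1},
                             secondary, primary, budget=2)
```

The `budget` parameter stands in for the limited memory and transfer rate mentioned above: low-priority data simply waits for a later cycle.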

The scene resource storage 356 is memory space in the primary memory 206 assigned to store data for the scene entities. The scene resource storage 356 stores, for example, texture data and mesh data (e.g., sub-items) of various scene entities or their sub-entities. Due to the limited storage space allocated to the scene resource storage 356, only a subset of the texture data and mesh data of scene entities is stored and available from the scene resource storage 356 for access by the graphics rendering module 344. The texture data and mesh data in the scene resource storage 356 may be selectively removed to make space for other texture data and mesh data that are likely to be used in the near future.

The OS 380 manages computer hardware and software resources. Specifically, the OS 380 acts as an intermediary between programs and the computer hardware. For example, the OS 380 can perform basic tasks, such as recognizing input from the user interface 210 and sending output to the display controller 208.

Scene Entity Data Structure and Update Processing

FIG. 4A is a data structure diagram of scene entities, according to an embodiment. A scene may include multiple scene entities SE1 through SEN. Some of these entities may include multiple sub-entities. As shown in the example of FIG. 4A, scene entity SE1 may include two sub-entities 1 and 2. Each of the sub-entities may also include one or more sub-items. For example, sub-entity 1 may have two sub-items: one mesh data (i.e., mesh 1) and one texture data (i.e., texture 1). On the other hand, sub-entity 2 may have three sub-items: one mesh data (i.e., mesh 2) and two texture data (i.e., texture 2 and texture 3). Hence, the scene entity SE1 has a total of 5 sub-items. Other scene entities may have different numbers of sub-entities and different numbers of sub-items as well as different levels of hierarchy.
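The hierarchy of FIG. 4A can be modeled, for illustration only, as nested dictionaries; the shape below is an assumption made for the sketch, not a structure defined by the patent:

```python
# Hypothetical nesting mirroring FIG. 4A: scene entity SE1 holds two
# sub-entities with five sub-items in total.
scene_entity_se1 = {
    "name": "SE1",
    "sub_entities": [
        {"name": "sub-entity 1", "sub_items": ["mesh 1", "texture 1"]},
        {"name": "sub-entity 2", "sub_items": ["mesh 2", "texture 2", "texture 3"]},
    ],
}

def count_sub_items(entity):
    """Count sub-items across all sub-entities of one scene entity."""
    return sum(len(se["sub_items"]) for se in entity["sub_entities"])
```

Counting SE1's sub-items this way yields 5, matching the total stated in the text; a deeper hierarchy would simply add more levels of nesting.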

The scene resource collector 348 receives updated scene entities whose sub-items are updated due to the operations 312. These scene entities have one or more updated sub-items whose data may not be available from the scene resource storage 356. The updated sub-items may be transferred in a sequential order, according to their priority, from the resource storage 318 in the secondary memory 214 to the scene resource storage 356. Hence, a task for determining such priority (hereinafter referred to as a "prioritizing task") may be performed on the updated scene entities or the updated sub-items. FIG. 4B is a diagram illustrating a list of updated scene entities, according to one embodiment. In this example, scene entities SE1, SE3, SE11, SE15, SE22 . . . SEZ are updated while other scene entities (e.g., SE2, SE4 through SE10, etc.) are not updated.

One way of performing such a prioritizing task is to perform processing in units of scene entities. That is, all of the sub-items in one scene entity (e.g., SE1) are processed, followed by all of the sub-items in the next scene entity (e.g., SE3). When such a method is used, to process all the sub-entities, the prioritizing task involves traversing the hierarchy of sub-items in a scene entity and extracting the sub-items. For example, in order to process the scene entity SE1, the prioritizing task would first process mesh 1 followed by texture 1, then mesh 2 followed by textures 2 and 3, etc. However, each of the scene entities may have different numbers of sub-items as well as different levels of hierarchy. Hence, the time for performing the prioritizing task on each scene entity can differ, depending on the number of sub-items and the number of hierarchy levels in each scene entity. The amount of time for performing the prioritizing task on the scene entities may be restricted to a time constraint or a time cycle. Hence, the scheduling or planning for the prioritizing task on the scene entities tends to become overly conservative, resulting in processing of fewer scene entities than otherwise possible because of the penalty or loss associated with stopping the processing of a scene entity midstream.

Hence, embodiments generate a flattened list of sub-items and use a sub-item as the unit for performing such a prioritizing task. FIG. 4C is a diagram illustrating a flattened list of sub-items of updated scene entities, according to an embodiment. The sub-items in all of the updated scene entities in FIG. 4B are extracted and stored in a flattened list of sub-items as illustrated in FIG. 4C. That is, the sub-items in the flattened list are not hierarchically structured. In one or more embodiments, the sub-items of the updated scene entities are sequentially ordered in the flattened list. After storing the sub-items in a flattened list, a task may be performed in units of sub-items instead of using the scene entities as units.
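The flattening step can be sketched as a traversal that appends each entity's sub-items, in order, to one flat list. The nested-dictionary shape used below is an assumption for illustration, not the patent's data layout:

```python
def flatten(updated_entities):
    """Traverse each updated scene entity's hierarchy and append its
    sub-items, in order, to one flat list (illustrative sketch)."""
    flat = []
    for entity in updated_entities:
        for sub_entity in entity["sub_entities"]:
            flat.extend(sub_entity["sub_items"])
    return flat

# Flattening SE1 from FIG. 4A produces a sequentially ordered list
# with no remaining hierarchy.
se1 = {"sub_entities": [
    {"sub_items": ["mesh 1", "texture 1"]},
    {"sub_items": ["mesh 2", "texture 2", "texture 3"]},
]}
flat_list = flatten([se1])
```

After this step the hierarchy information is gone: a consumer of `flat_list` sees only uniform entries, which is what makes per-item processing time predictable.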

Because processing of each sub-item by a prioritization task takes approximately the same amount of time, overly conservative estimation of processing time can be reduced or removed, leading to processing of more sub-items within the constrained time compared to using the scene entity as a unit for performing the prioritization task. Moreover, the sub-items in the flattened list of FIG. 4C may be stored in closer memory space of the primary memory 206 compared to hierarchically structured scene entities (e.g., as illustrated in FIG. 4A). Accordingly, a series of sub-items in the flattened list of FIG. 4C may be accessed faster from the primary memory 206 compared to using the hierarchical data structure.

In one or more embodiments, a C-index may be used to identify the next sub-item to be processed by the prioritization task. As a sub-item is processed, the C-index moves to the next sub-item in the list of FIG. 4C. In this way, the sub-items for processing may be tracked over multiple priority task cycles. A priority task cycle refers to a cycle of performing part of the prioritization task and is to be terminated within a timing constraint. Typically, the timing constraint of the priority task cycle is governed by the frame rate of the computer game.

When all of the sub-items in the flattened list cannot be processed within a priority task cycle, a subset of the sub-items in the flattened list is processed by the prioritization task and the rest of the sub-items may be processed in subsequent priority task cycles. In FIG. 4C, the F-index indicates the first sub-item that was processed in the current priority task cycle. Specifically, mesh 1, texture 1, and mesh 2 were processed in a prior cycle, and the current cycle started with processing of texture 2. After the current priority task cycle is concluded, the F-index is updated to indicate the first sub-item to be processed in the next cycle. Hence, by tracking the F-index, the prioritization task may resume in the next priority task cycle without performing duplicative jobs. Processing of all sub-items in a flattened list may take a number of cycles and is referred to herein as a "pass" of the prioritization task.
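The cycle bookkeeping above might be sketched as follows, where a local C-index advances per processed sub-item and the returned F-index records where the next cycle resumes. The per-cycle `budget` stands in for the frame-rate timing constraint; all names are illustrative:

```python
def run_priority_cycle(flat_list, f_index, budget):
    """Process up to `budget` sub-items starting at f_index; return the
    processed slice and the updated f_index for the next cycle."""
    c_index = f_index
    processed = []
    while c_index < len(flat_list) and len(processed) < budget:
        processed.append(flat_list[c_index])  # stand-in for the prioritizing work
        c_index += 1
    return processed, c_index  # new f_index = first unprocessed sub-item

# Two cycles over the five sub-items of FIG. 4C with a budget of three.
items = ["mesh 1", "texture 1", "mesh 2", "texture 2", "texture 3"]
cycle1, f_index = run_priority_cycle(items, 0, budget=3)
cycle2, f_index = run_priority_cycle(items, f_index, budget=3)
```

Once `f_index` reaches the end of the list, a full "pass" of the prioritization task is complete, with no sub-item processed twice.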

The flattened list may be updated continuously as the operations 312 take place. For example, when a game character changes his outfit, a scene entity corresponding to the character may be updated to reflect the change in the outfit. As such a change involves updating of the mesh data or texture data corresponding to the outfit, the scene entity of the character is added to the list of updated scene entities, and its sub-items are added to the flattened list. After processing of the sub-items by a prioritization task is finished, the sub-items remain in the flattened list until a removal operation is initiated to remove some of the sub-items. In one or more embodiments, the sub-items of the flattened list of the scene entities updated in a prior frame are removed before new sub-items for scene entities to be updated in a next frame are received.

Example Architecture of Scene Resource Collector

FIG. 5 is a block diagram of the scene resource collector 348, according to an embodiment. The scene resource collector 348 receives a list 512 of updated scene entities as a result of the operations 312, generates a flattened list 522 of sub-items of the updated scene entities, and generates priority information PInfo 1, PInfo 2 indicating priority for rendering sub-items in a scene. For this purpose, the scene resource collector 348 may include, among other components, an updated scene entity list 512, a collector module 518, a flattened sub-item list 522, and a rendering heuristics module 526.

When one or more operations 312 result in the updating of a scene entity, the updated scene entity is stored in the updated scene entity list 512. In the example of FIG. 5, the operations resulted in updating of scene entities SE1, SE3, SE(Z-13), SE(Z-12), SE(Z-10), SE(Z-5), SE(Z-3), and SEZ. The remaining scene entities are not updated, although the sub-items of these scene entities may still reside in the flattened sub-item list 522 as a result of prior operations.

The collector module 518 is a software module for processing updated scene entities to extract sub-items for storage in the flattened sub-item list 522. When multithreading or multiprocessing is enabled in the client device 140, multiple collector jobs may be performed in parallel to process multiple updated scene entities at a time. As shown in the example of FIG. 5, three collector jobs are executed to extract sub-items from up to three scene entities at a time.

The time for extracting sub-items may be time constrained. Hence, when there is a large number of updated scene entities, only a subset of the updated scene entities may be processed in a collector cycle and the remaining scene entities may be processed in subsequent collector cycles. A collector cycle refers to a time-constrained process of traversing the subset of updated scene entities and extracting their sub-items. The collector cycle is also governed by the frame rate of the computer game, and may be the same or a different length as a prioritization task cycle.

A scene entity index may be used to indicate the next scene entity to be fetched and processed by the collector module 518. In the example of FIG. 5, scene entities in hatch-patterned boxes indicate scene entities that are already processed or are currently being processed. Hence, the scene entity index points to SE(Z-5) as the next scene entity to be processed by the collector module 518. After a scene entity in the updated scene entity list 512 is processed, the processed scene entity may be removed from the updated scene entity list 512 or retained until a new set of updated scene entities is received as a result of an operation.

Each of the collector jobs 1, 2, 3 reads an updated scene entity, traverses the hierarchy of the updated scene entity, extracts sub-items from the updated scene entity, and appends the extracted sub-items at the end of the flattened sub-item list 522. In one or more embodiments, the collector module 518 processes the scene entities in a sequential order, which is a downward direction of the list in FIG. 5.
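Parallel collector jobs of this kind could be sketched with worker threads that each claim the next scene entity via a shared index and append its sub-items atomically. This is an illustrative sketch under assumed names; note that with fully parallel jobs the order in which different entities' sub-items land in the list may interleave, whereas a real engine enforcing the sequential ordering described above would need extra coordination:

```python
import threading

def run_collector_jobs(updated_entities, num_jobs=3):
    """Hypothetical sketch: `num_jobs` worker threads claim updated scene
    entities one at a time and append their sub-items to a shared list."""
    flattened, lock, next_idx = [], threading.Lock(), [0]

    def collector():
        while True:
            with lock:                     # claim the next unprocessed entity
                if next_idx[0] >= len(updated_entities):
                    return
                entity = updated_entities[next_idx[0]]
                next_idx[0] += 1
            # Traverse the entity's hierarchy outside the lock.
            sub_items = [s for se in entity["sub_entities"] for s in se["sub_items"]]
            with lock:                     # append this entity's items atomically
                flattened.extend(sub_items)

    threads = [threading.Thread(target=collector) for _ in range(num_jobs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return flattened

ents = [{"sub_entities": [{"sub_items": [f"item {i}"]}]} for i in range(6)]
collected = run_collector_jobs(ents)
```

The lock around the shared index is what lets three jobs pull from one list without processing the same scene entity twice.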

The flattened sub-item list 522 stores sub-items that are extracted by the collector module 518. Each entry of the flattened sub-item list 522 includes data for a sub-item, which may include metadata and a pointer to a memory location of corresponding texture data or mesh data in the primary memory 206 or the secondary memory 214. The metadata may include, among others, the number of levels of details available for the mesh data or texture data, and location information of the sub-item. The location information may be coordinates of the virtual game space occupied by the sub-item or a coordinate of a polygon of the sub-item closest to a viewing camera location. In one or more embodiments, one or more entries of the sub-items may further store additional information associated with the prioritizing task performed in the previous pass. Such additional information may be a level of details determined in the prior pass of the prioritizing task. The level of details refers to a resolution level of texture data or mesh data (e.g., high resolution or low resolution). The resource storage 318 may store multiple versions of texture data or mesh data with different levels of details for the same sub-item. Texture data or mesh data with a higher level of details or resolution is likely to be used in rendering a scene entity or a sub-item of higher priority. Conversely, if the scene entity or the sub-item has lower priority, texture data or mesh data of a lower resolution is likely to be used to save memory and reduce the computation load of the client device 140. The additional information stored in the entry may be used to validate or expedite the next pass of the prioritizing task.
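One illustrative shape for an entry of the flattened list is sketched below. The field names are hypothetical, and a plain integer stands in for the memory pointer:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SubItemEntry:
    """One entry of the flattened sub-item list (field names are illustrative)."""
    resource_pointer: int                 # stands in for a pointer to texture/mesh data
    num_lod_levels: int                   # number of levels of details available
    location: Tuple[float, float, float]  # coordinates in the virtual game space
    prior_lod: Optional[int] = None       # level of details from the previous pass, if any

entry = SubItemEntry(resource_pointer=0x4F00, num_lod_levels=3,
                     location=(12.0, 0.5, -3.0))
```

Making `prior_lod` optional reflects that the field is only populated after the sub-item has been through at least one pass of the prioritizing task.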

The rendering heuristics module 526 is a software module that performs a prioritizing task on the sub-items in the flattened sub-item list 522. The rendering heuristics module 526 may perform multiprocessing or multithreading to execute multiple prioritizing jobs (e.g., P_Job 1 and P_Job 2) in parallel. The rendering heuristics module 526 may use the C-index as described above with reference to FIG. 4C to determine the next sub-items for processing.

The rendering heuristics module 526 may receive and store one or more parameters 530 for performing the prioritization task. Such parameters may, for example, indicate a coordinate of the viewing camera location from which the scene is captured in the virtual environment. Whether to render a scene entity or its sub-items in a scene and/or the level of details of the scene entity or its sub-items may be determined based on, for example, the distance from the viewing camera location to the scene entity or the sub-items. Hence, the parameters 530 are received and used by the prioritizing jobs to generate priority information (e.g., PInfo 1, PInfo 2) indicating the priority of the sub-items. If the priority of a sub-item as indicated by the priority information is higher, texture data or mesh data corresponding to the sub-item is given higher priority for transfer from the resource storage 318 to the scene resource storage 356 by the scene resource retriever 352. Also, the level of details associated with the sub-item is likely to be higher as its priority increases.
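A distance-based priority heuristic of this kind might be sketched as follows. The linear falloff, the cutoff distance, and the priority-to-level-of-details mapping are illustrative assumptions:

```python
import math

def priority_from_distance(camera_location, sub_item_location, max_distance=1000.0):
    """Closer sub-items get higher priority in [0, 1]; linear falloff is illustrative."""
    d = math.dist(camera_location, sub_item_location)
    return max(0.0, 1.0 - d / max_distance)

def lod_for_priority(priority, num_lod_levels):
    """Map a priority to a level of details: higher priority, more detail."""
    return min(num_lod_levels - 1, int(priority * num_lod_levels))
```

Under this sketch, a sub-item at the camera location receives priority 1.0 and the highest available level of details, while a sub-item beyond the cutoff distance receives priority 0.0 and the lowest.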

After the scene resource retriever 352 receives the priority information of sub-items from the scene resource collector 348, the scene resource retriever 352 executes an algorithm that takes into account, for example, available memory space in the scene resource storage 356, data transfer bandwidth between the primary memory 206 and the secondary memory 214, and the graphics setting of the game to determine when the texture data or mesh data corresponding to a sub-item is to be retrieved and which version of the texture data or mesh data is to be retrieved (e.g., the high-resolution or low-resolution version).
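Such a retrieval-planning algorithm could be approximated by a greedy sketch like the one below. The memory costs, the descending-priority walk, and the two-version (high/low) choice are illustrative assumptions rather than the patent's algorithm:

```python
def plan_transfers(prioritized_items, free_memory, high_cost=4, low_cost=1):
    """Greedy sketch: walk sub-items by descending priority, handing out
    high-resolution versions while memory allows, then low-resolution ones."""
    plan = []
    for name, priority in sorted(prioritized_items, key=lambda e: e[1], reverse=True):
        cost = high_cost if free_memory >= high_cost else low_cost
        if free_memory < cost:
            break  # defer remaining sub-items to a later cycle
        free_memory -= cost
        plan.append((name, "high" if cost == high_cost else "low"))
    return plan

# With 5 units free: "a" gets the high-resolution version (cost 4),
# "b" falls back to low resolution (cost 1), and "c" is deferred.
plan = plan_transfers([("a", 0.9), ("b", 0.5), ("c", 0.1)], free_memory=5)
```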

The rendering heuristics module 526 may use various other factors to determine the priority of a sub-item as indicated by the priority information. Such factors may include, for example, the importance of the scene entity or the sub-item as defined by a game developer or user, types of scene entities (e.g., character object or background object), and a current mode of the game (e.g., a mode for replaying cutscenes, a gameplay mode, or an equipment preview mode).

The priority information, or part of it, may be stored in the entry of the sub-item in the flattened sub-item list 522. The stored information may then be used, for example, in a subsequent pass to validate the accuracy of the priority computed in the subsequent pass.

Although only three collector jobs and two prioritization jobs are illustrated in FIG. 5, this is merely for convenience of explanation. In practice, many more collector jobs and prioritization jobs may be used for processing the updated scene entities and their sub-items.

Example Process of Generating and Using Flattened List of Sub-Items

FIG. 6 is a flowchart illustrating a process for implementing the techniques described herein, according to an embodiment. A list of updated scene entities is received 610 at the scene resource collector and stored in its updated scene entity list 512.

Sub-items of at least a subset of the updated scene entities are extracted 614 in a collector cycle. When the number of updated scene entities or their sub-items is large, it may take multiple collector cycles to extract all the sub-items from the updated scene entities. A collector cycle may be set to terminate in a predetermined amount of time.

The extracted sub-items are then added 618 to the flattened list of sub-items. In one or more embodiments, newly extracted sub-items are appended at the end of the flattened list.

When additional scene entities are received by the scene resource collector, or when not all previously received scene entities have been processed for extraction of their sub-items, the process proceeds 622 to handle the remaining and/or newly received scene entities in the next collector cycle. The process of extracting the sub-items from the scene entities may be repeated over multiple collector cycles until no scene entity is left for processing.

In a priority task cycle, a prioritization task is performed 626 on at least a subset of the extracted sub-items in the flattened list. As a result, priority information on the subset of extracted sub-items is generated. As part of the prioritization task, a scheduling or planning operation may be performed to decide how many sub-items are to be processed in the current priority task cycle.
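A priority task cycle that processes only a subset of the flattened list and resumes later might look like this sketch. The batch size stands in for the scheduling decision, and the scoring is a placeholder:

```python
def run_priority_cycle(flattened_list, c_index, batch_size):
    """One priority task cycle: score up to batch_size sub-items starting at
    c_index, and report where the next cycle should resume."""
    end = min(c_index + batch_size, len(flattened_list))
    # Placeholder scoring; a real heuristic (camera distance, entity type,
    # game mode) would go here.
    priority_info = {name: 1.0 / (i + 1)
                     for i, name in enumerate(flattened_list[c_index:end], start=c_index)}
    return priority_info, end

items = ["s0", "s1", "s2", "s3", "s4"]
info1, idx = run_priority_cycle(items, 0, 2)    # first cycle covers s0, s1
info2, idx = run_priority_cycle(items, idx, 2)  # next cycle resumes at s2
```

Because the list is flat, each cycle's cost is simply proportional to the batch size, which is what lets the scheduler size the batch to fit the time budget.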

If there are remaining sub-items or additional sub-items for processing, the process proceeds 630 to perform the prioritization task on these remaining or additional sub-items in the next priority task cycle.

The generated priority information of each sub-item is then sent 634 to the scene resource retriever.

The steps and sequence of steps described above with reference to FIG. 6 are merely illustrative. For example, the process of extracting 614 sub-items and the process of performing 626 the prioritization task may be performed in parallel. Further, instead of extracting the sub-items from all the updated scene entities or performing the prioritization task on all extracted sub-items, some updated scene entities and/or sub-items may be discarded from processing.

Although embodiments described above are explained primarily in reference to a game system, the embodiments may be applied to other applications such as engineering software, navigational software, and educational software.

While particular embodiments and applications have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope of the present disclosure.

Claims

  1. A method, comprising: receiving a plurality of updated scene entities each including a plurality of sub-items associated with a mesh or a texture of the respective scene entity, each of the plurality of updated scene entities representing a data object processed for rendering in a scene of a computer game and each of the sub-items representing a unit of data associated with a graphics operation and including information associated with the respective mesh or texture; storing a first updated scene entity of the plurality of updated scene entities in a primary memory separate from a secondary memory, wherein the first updated scene entity comprises a flattened list of a first plurality of sub-items each having a respective priority; executing multiple prioritization jobs in parallel, wherein priority information for the list resulting from the executing is based on the respective priorities of each of the first plurality of sub-items in the list; transferring data associated with each of the first plurality of sub-items as a unit to the primary memory from the secondary memory with a slower access speed and more memory space than the primary memory at a time based on the priority information; and rendering a frame of the scene including the first updated scene entity and at least a second updated scene entity of the plurality of updated scene entities having a second plurality of hierarchically structured sub-items, wherein an amount of the first plurality of sub-items processed during the rendering is more than an amount of the second plurality of sub-items processed during the rendering.
  2. The method of claim 1, wherein the time is further based on available memory space in the primary memory and the secondary memory, data transfer bandwidth between the primary memory and the secondary memory, and a graphics setting of the computer game.
  3. The method of claim 1, wherein the priority information is further based on an importance of the first updated scene entity as defined by a developer or a user of the computer game, a type of the first updated scene entity, and a current mode of the computer game.
  4. The method of claim 1, further comprising executing a plurality of collecting jobs in parallel for processing the plurality of updated scene entities.
  5. The method of claim 1, wherein processing the first plurality of sub-items and the second plurality of sub-items occurs in a priority task cycle executed within a predetermined amount of time.
  6. The method of claim 5, wherein the amount of the first plurality of sub-items and the amount of the second plurality of sub-items are processed in the priority task cycle.
  7. The method of claim 5, wherein any remaining sub-items in the flattened list that are not processed in the priority task cycle are processed in one or more subsequent priority task cycles.
  8. The method of claim 1, wherein a subset of the plurality of updated scene entities is processed in a collector cycle within a predetermined amount of time.
  9. The method of claim 8, wherein any remaining updated scene entities of the plurality of updated scene entities not processed in the collector cycle are processed in one or more subsequent collector cycles.
  10. The method of claim 1, wherein the respective priorities of the first plurality of sub-items are further based on respective distances of the first plurality of sub-items to a viewing camera location of the scene.
  11. The method of claim 1, further comprising storing at least part of the priority information in an entry of the flattened list.
  12. The method of claim 1, wherein at least one of the plurality of updated scene entities includes a plurality of sub-entities.
  13. The method of claim 1, wherein a level of details of the data associated with each of the first plurality of sub-items is based on the priority information.
  14. A non-transitory computer-readable storage medium storing instructions executable by a processor, the instructions when executed cause the processor to: receive a plurality of updated scene entities each including a plurality of sub-items associated with a mesh or a texture of the respective scene entity, each of the plurality of updated scene entities representing a data object processed for rendering in a scene of a computer game and each of the sub-items representing a unit of data associated with a graphics operation and including information associated with the respective mesh or texture; store a first updated scene entity of the plurality of updated scene entities in a primary memory separate from a secondary memory, wherein the first updated scene entity comprises a flattened list of a first plurality of sub-items each having a respective priority; execute multiple prioritization jobs in parallel, wherein priority information for the list resulting from the execution is based on the respective priorities of each of the first plurality of sub-items in the list; transfer data associated with each of the first plurality of sub-items as a unit to the primary memory from the secondary memory with a slower access speed and more memory space than the primary memory at a time based on the priority information; and render a frame of the scene including the first updated scene entity and at least a second updated scene entity of the plurality of updated scene entities having a second plurality of hierarchically structured sub-items, wherein an amount of the first plurality of sub-items processed during the rendering is more than an amount of the second plurality of sub-items processed during the rendering.
  15. The non-transitory computer-readable storage medium of claim 14, wherein the time is further based on available memory space in the primary memory and the secondary memory, data transfer bandwidth between the primary memory and the secondary memory, and a graphics setting of the computer game.
  16. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the processor to execute a plurality of collecting jobs in parallel for processing the plurality of updated scene entities.
  17. The non-transitory computer-readable storage medium of claim 14, wherein the instructions when executed further cause the processor to process the first plurality of sub-items and the second plurality of sub-items in a priority task cycle executed within a predetermined amount of time.
  18. The non-transitory computer-readable storage medium of claim 14, wherein the respective priorities of the first plurality of sub-items are further based on respective distances of the first plurality of sub-items to a viewing camera location of the scene.
  19. The non-transitory computer-readable storage medium of claim 14, wherein a level of details of the data associated with each of the first plurality of sub-items is based on the priority information.
  20. A system comprising: a processor; and a memory storing instructions executable by the processor, the instructions when executed cause the processor to: receive a plurality of updated scene entities each including a plurality of sub-items associated with a mesh or a texture of the respective scene entity, each of the plurality of updated scene entities representing a data object processed for rendering in a scene of a computer game and each of the sub-items representing a unit of data associated with a graphics operation and including information associated with the respective mesh or texture; store a first updated scene entity of the plurality of updated scene entities in a primary memory separate from a secondary memory, wherein the first updated scene entity comprises a flattened list of a first plurality of sub-items each having a respective priority; execute multiple prioritization jobs in parallel, wherein priority information for the list resulting from the execution is based on the respective priorities of each of the first plurality of sub-items in the list; transfer data associated with each of the first plurality of sub-items as a unit to the primary memory from the secondary memory with a slower access speed and more memory space than the primary memory at a time based on the priority information; and render a frame of the scene including the first updated scene entity and at least a second updated scene entity of the plurality of updated scene entities having a second plurality of hierarchically structured sub-items, wherein an amount of the first plurality of sub-items processed during the rendering is more than an amount of the second plurality of sub-items processed during the rendering.
