U.S. Pat. No. 11,682,172

INTERACTIVE VIDEO GAME SYSTEM HAVING AN AUGMENTED VIRTUAL REPRESENTATION

Assignee: Universal City Studios LLC

Issue Date: February 8, 2021

Abstract

An interactive video game system includes at least one sensor and at least one display device disposed near a play area. The system also includes a controller communicatively coupled to the at least one sensor and the at least one display device, wherein the controller is configured to: receive, from the at least one sensor, the scanning data of the player in the play area; generate at least one model from the scanning data of the player; identify an action of the player in the play area based on the at least one model; generate the virtual representation for the player based on the at least one model and the action of the player; and present, on the display device, the virtual representation of the player in a virtual environment, wherein an action of the virtual representation is augmented relative to the action of the player.

Description

DETAILED DESCRIPTION

As used herein, “scanning data” refers to two-dimensional (2D) or three-dimensional (3D) data collected by sensing (e.g., measuring, imaging, ranging) visible outer surfaces of players in a play area. More specifically, “volumetric scanning data,” as used herein, refers to 3D scanning data, such as point cloud data, and may be contrasted with “2D scanning data,” such as image data.

As used herein, a “player model” is a 2D or 3D model generated from the scanning data of a player that generally describes the outer surfaces of the player and may include texture data. More specifically, a “volumetric player model” or “volumetric model,” as used herein, refers to a 3D player model generated from volumetric scanning data of a player, and may be contrasted with a “2D player model” that is generated from 2D scanning data of a player.

A “shadow model,” as used herein, refers to a texture-less volumetric model of a player generated from the scanning data of a player, either directly or by way of the player model. As such, when presented on a 2D surface, such as a display device, the shadow model of a player has a shape substantially similar to a shadow or silhouette of the player when illuminated from behind.

A “skeletal model,” as used herein, refers to a 3D model generated from the scanning data of a player that defines predicted locations and positions of certain bones (e.g., bones associated with the arms, legs, head, spine) of a player to describe the location and pose of the player within a play area. As such, the skeletal model is used to determine the movements and actions of players in the play area to trigger events in a virtual environment and/or in the play area.

Present embodiments are directed to an interactive video game system that enables multiple players (e.g., up to 12) to perform actions in a physical play area to control virtual representations of the players in a displayed virtual environment. The disclosed interactive video game system includes one or more sensors (e.g., cameras, light sensors, infrared (IR) sensors) disposed around the play area to capture scanning data (e.g., 2D or volumetric scanning data) of the players. For example, certain embodiments of the disclosed interactive video game system include an array having two or more volumetric sensors, such as depth cameras and Light Detection and Ranging (LIDAR) devices, capable of volumetrically scanning each of the players. The system includes suitable processing circuitry that generates models (e.g., player models, shadow models, skeletal models) for each player based on the scanning data collected by the one or more sensors, as discussed below. During game play, the one or more sensors capture the actions of the players in the play area, and the system determines the nature of these actions based on the generated player models. Accordingly, the interactive video game system continuously updates the virtual representations of the players and the virtual environment based on the actions of the players and their corresponding in-game effects.
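
The scan-model-action-render loop described above can be sketched as follows. This is a minimal, non-limiting illustration in Python; the names (`PlayerModel`, `game_tick`, `identify_action`) and the toy "wave" gesture rule are hypothetical and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PlayerModel:
    # Simplified stand-in for the player/skeletal models described in the text.
    player_id: str
    joints: dict = field(default_factory=dict)  # joint name -> (x, y, z)

def identify_action(model):
    # Toy rule (illustrative only): a wrist above the head counts as a "wave".
    head_y = model.joints.get("head", (0.0, 0.0, 0.0))[1]
    wrist_y = model.joints.get("wrist", (0.0, 0.0, 0.0))[1]
    return "wave" if wrist_y > head_y else "idle"

def game_tick(scan_frames):
    """One iteration of the loop: scanning data -> model -> identified action.

    `scan_frames` maps a player id to that player's joint estimates for the
    current frame; the rendering step would then consume the returned pairs.
    """
    updates = {}
    for player_id, frame in scan_frames.items():
        model = PlayerModel(player_id, joints=frame)  # "generate model"
        action = identify_action(model)               # "identify action"
        updates[player_id] = (model, action)
    return updates
```

In a real system, the rendering stage would run each tick against these (model, action) pairs to refresh the virtual representations.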

As mentioned, the disclosed interactive video game system includes one or more sensors arranged around the play area to monitor the actions of the players within the play area. For example, in certain embodiments, an array including multiple sensors may be used to generally ensure that a skeletal model of each player can be accurately generated and updated throughout game play despite potential occlusion from the perspective of one or more sensors of the array. In other embodiments, fewer sensors may be used (e.g., a single camera), and the data may be processed using a machine-learning algorithm that generates complete skeletal models for the players despite potential occlusion. For such embodiments, the machine learning agent may be trained in advance using a corpus of scanning data in which the actual skeletal models of players are known (e.g., manually identified by a human, identified using another skeletal tracking algorithm) while portions of one or more players are occluded. As such, after training, the machine learning agent may then be capable of generating skeletal models of players from scanning data despite potential occlusion.
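
The patent describes a trained machine-learning agent for completing occluded skeletal models. As a rough, non-limiting stand-in for what such an agent's output might look like, the sketch below fills a missing joint geometrically, by interpolating between visible neighboring joints; the function and joint names are hypothetical.

```python
def complete_skeleton(joints, neighbors):
    """Fill occluded joints by interpolating between two visible neighbors.

    `joints` maps joint name -> (x, y, z) for joints actually observed.
    `neighbors` maps an occludable joint -> (joint_a, joint_b), the two
    joints it lies between. A trained model would predict occluded joints
    in practice; midpoint interpolation is only an illustrative fallback.
    """
    completed = dict(joints)
    for joint, (a, b) in neighbors.items():
        if joint not in completed and a in completed and b in completed:
            ax, ay, az = completed[a]
            bx, by, bz = completed[b]
            completed[joint] = ((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2)
    return completed
```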

Additionally, the processing circuitry of the system may use the scanning data to generate aspects (e.g., size, shape, outline) of the virtual representations of each player within the virtual environment. In certain embodiments, certain aspects (e.g., color, texture, scale) of the virtual representation of each player may be further adjusted or modified based on information associated with the player. As discussed below, this information may include information related to game play (e.g., items acquired, achievements unlocked), as well as other information regarding activities of the player outside of the game (e.g., player performance in other games, items purchased by the player, locations visited by the player). Furthermore, the scanning data collected by the sensors can be used by the processing circuitry of the game system to generate additional content, such as souvenir images in which a player model is illustrated as being within the virtual world.

Furthermore, the processing circuitry of the system may use the scanning data to augment movements of the virtual representations of each player. For example, in certain embodiments, the processing circuitry of the system may use the scanning data to generate a skeletal model indicating that a player is moving or posing in a particular manner. In response, the processing circuitry may augment the virtual representation of the player to enable the virtual representation to move or change in a manner that goes beyond the actual movement or pose of the player, such that the motion and/or appearance of the virtual representation is augmented or enhanced. For example, in an embodiment in which a virtual representation, such as a particular video game character, has particular enhanced abilities (e.g., an ability to jump extremely high, an ability to swim extremely fast, an ability to fly), certain player movements or poses (e.g., a small hopping motion, a swim stroke through the air, a flapping motion) may be detected and may trigger these enhanced abilities in the virtual representations of the players. Accordingly, the disclosed interactive video game system enables an immersive and engaging experience for multiple simultaneous players.
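
The "small hop triggers an exaggerated jump" augmentation above can be sketched as a gesture detector plus a gain applied to the detected motion. All names, thresholds, and gains below are hypothetical illustrations, not values from the patent.

```python
def detect_hop(prev_hip_y, curr_hip_y, threshold_m=0.08):
    """Return True when the hip joint rises enough to count as a hop."""
    return (curr_hip_y - prev_hip_y) > threshold_m

def virtual_jump_height(prev_hip_y, curr_hip_y, gain=8.0, cap_m=10.0):
    """Map a small physical hop onto an exaggerated in-game jump.

    The actual rise of the player's hip (in meters) is multiplied by an
    augmentation gain and capped, so the virtual representation jumps far
    higher than the player does.
    """
    if not detect_hop(prev_hip_y, curr_hip_y):
        return 0.0
    return min((curr_hip_y - prev_hip_y) * gain, cap_m)
```

A swim stroke or flapping motion could be handled the same way: detect the pose from the skeletal model, then drive a scaled-up version of the motion in the virtual environment.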

With the foregoing in mind, FIG. 1 is a schematic diagram of an embodiment of an interactive video game system 10 that enables multiple players 12 (e.g., players 12A and 12B) to control respective virtual representations 14 (e.g., virtual representations 14A and 14B) by performing actions in a play area 16. It may be noted that while, for simplicity, the present description is directed to two players 12 using the interactive video game system 10, in other embodiments, the interactive video game system 10 can support more than two (e.g., 6, 8, 10, 12, or more) players 12. The play area 16 of the interactive video game system 10 illustrated in FIG. 1 is described herein as being a 3D play area 16A. The term “3D play area” is used herein to refer to a play area 16 having a width (corresponding to an x-axis 18), a height (corresponding to a y-axis 20), and a depth (corresponding to a z-axis 22), wherein the system 10 generally monitors the movements of each of the players 12 along the x-axis 18, y-axis 20, and z-axis 22. The interactive video game system 10 updates the location of the virtual representations 14 presented on a display device 24 along an x-axis 26, a y-axis 28, and a z-axis 30 in a virtual environment 32 in response to the players 12 moving throughout the play area 16. While the 3D play area 16A is illustrated as being substantially circular, in other embodiments, the 3D play area 16A may be square-shaped, rectangular, hexagonal, octagonal, or any other suitable shape.

The embodiment of the interactive video game system 10 illustrated in FIG. 1 includes a primary controller 34, having memory circuitry 33 and processing circuitry 35, that generally provides control signals to control operation of the system 10. As such, the primary controller 34 is communicatively coupled to an array 36 of sensing units 38 disposed around the 3D play area 16A. More specifically, the array 36 of sensing units 38 may be described as symmetrically distributed around a perimeter of the play area 16. In certain embodiments, at least a portion of the array 36 of sensing units 38 may be positioned above the play area 16 (e.g., suspended from a ceiling or on elevated platforms or stands) and pointed at a downward angle to image the play area 16. In other embodiments, at least a portion of the array 36 of sensing units 38 may be positioned near the floor of the play area 16 and pointed at an upward angle to image the play area 16. In certain embodiments, the array 36 of the interactive video game system 10 may include at least two sensing units 38 per player (e.g., players 12A and 12B) in the play area 16. Accordingly, in certain embodiments, the array 36 of sensing units 38 is suitably positioned to image a substantial portion of potential vantage points around the play area 16 to reduce or eliminate potential player occlusion. However, as mentioned above, in other embodiments, the array 36 may include fewer sensing units 38 (e.g., a single sensing unit), and the processing circuitry 35 may rely on a machine learning agent to deal with potential occlusion situations.

In the illustrated embodiment, each sensing unit 38 includes a respective sensor 40, which may be a volumetric sensor (e.g., an infrared (IR) depth camera, a LIDAR device, or another suitable ranging device) or a 2D imaging device (e.g., an optical camera). For example, in certain embodiments, all of the sensors 40 of the sensing units 38 in the array 36 are either IR depth cameras or LIDAR devices, while in other embodiments, a mixture of IR depth cameras, LIDAR devices, and/or optical cameras is present within the array 36. It is presently recognized that both IR depth cameras and LIDAR devices can be used to volumetrically scan each of the players 12, and the collected volumetric scanning data can be used to generate various models of the players, as discussed below. For example, in certain embodiments, IR depth cameras in the array 36 may be used to collect data to generate skeletal models, while the data collected by LIDAR devices in the array 36 may be used to generate player models and/or shadow models for the players 12, as discussed in greater detail below. It is also recognized that LIDAR devices, which collect point cloud data, are generally capable of scanning and mapping a larger area than depth cameras, typically with better accuracy and resolution. As such, in certain embodiments, at least one sensing unit 38 of the array 36 includes a corresponding volumetric sensor 40 that is a LIDAR device to enhance the accuracy or resolution of the array 36 and/or to reduce the total number of sensing units 38 present in the array 36.

Further, each illustrated sensing unit 38 includes a sensor controller 42 having suitable memory circuitry 44 and processing circuitry 46. The processing circuitry 46 of each sensing unit 38 executes instructions stored in the memory circuitry 44 to enable the sensing unit 38 to scan the players 12 and generate scanning data (e.g., volumetric and/or 2D scanning data) for each of the players 12. In the illustrated embodiment, the sensing units 38 are communicatively coupled to the primary controller 34 via a high-speed internet protocol (IP) network 48 that enables low-latency exchange of data between the devices of the interactive video game system 10. Additionally, in certain embodiments, the sensing units 38 may each include a respective housing that packages the sensor controller 42 together with the sensor 40.

It may be noted that, in other embodiments, the sensing units 38 may not include a respective sensor controller 42. For such embodiments, the processing circuitry 35 of the primary controller 34, or other suitable processing circuitry of the system 10, is communicatively coupled to the respective sensors 40 of the array 36 to provide control signals directly to, and to receive data signals directly from, the sensors 40. However, it is presently recognized that processing (e.g., filtering, skeletal mapping) the volumetric scanning data collected by each of these sensors 40 can be processor-intensive. As such, in certain embodiments, it can be advantageous to divide the workload by utilizing dedicated processors (e.g., processors 46 of each of the sensor controllers 42) to process the scanning data collected by the respective sensor 40, and then to send the processed data to the primary controller 34. For example, in the illustrated embodiment, each of the processors 46 of the sensor controllers 42 processes the scanning data collected by its respective sensor 40 to generate partial models (e.g., partial volumetric or 2D models, partial skeletal models, partial shadow models) of each of the players 12, and the processing circuitry 35 of the primary controller 34 receives and fuses or combines the partial models to generate complete models of each of the players 12, as discussed below.
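
One simple way to fuse per-sensor partial skeletal models into a complete model is a confidence-weighted average of each joint over the sensors that observed it. The patent does not specify the fusion method; the sketch below is an illustrative assumption, with hypothetical names and data shapes.

```python
def fuse_partial_skeletons(partials):
    """Combine per-sensor partial skeletons into one complete skeleton.

    Each element of `partials` maps joint name -> ((x, y, z), confidence),
    containing only the joints that sensor could see. The fused position of
    each joint is the confidence-weighted mean across sensors.
    """
    sums, weights = {}, {}
    for partial in partials:
        for joint, ((x, y, z), conf) in partial.items():
            sx, sy, sz = sums.get(joint, (0.0, 0.0, 0.0))
            sums[joint] = (sx + conf * x, sy + conf * y, sz + conf * z)
            weights[joint] = weights.get(joint, 0.0) + conf
    return {j: (sums[j][0] / w, sums[j][1] / w, sums[j][2] / w)
            for j, w in weights.items() if w > 0}
```

A joint occluded from one sensor's vantage point still appears in the fused result as long as at least one other sensor observed it, which mirrors the motivation for the multi-sensor array.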

Additionally, in certain embodiments, the primary controller 34 may also receive information from other sensing devices in and around the play area 16. For example, the illustrated primary controller 34 is communicatively coupled to a radio-frequency (RF) sensor 45 disposed near (e.g., above, below, adjacent to) the 3D play area 16A. The illustrated RF sensor 45 receives a uniquely identifying RF signal from a wearable device 47, such as a bracelet or headband having a radio-frequency identification (RFID) tag, worn by each of the players 12. In response, the RF sensor 45 provides signals to the primary controller 34 regarding the identities and the relative positions of the players 12 in the play area 16. As such, for the illustrated embodiment, the processing circuitry 35 of the primary controller 34 receives and combines the data collected by the array 36, and potentially other sensors (e.g., the RF sensor 45), to determine the identities, locations, and actions of the players 12 in the play area 16 during game play. Additionally, the illustrated primary controller 34 is communicatively coupled to a database system 50, or any other suitable data repository storing player information. The database system 50 includes processing circuitry 52 that executes instructions stored in memory circuitry 54 to store and retrieve information associated with the players 12, such as various models (e.g., player, shadow, and/or skeletal models) associated with each player, player statistics (e.g., wins, losses, points, total game play time), player attributes or inventory (e.g., abilities, textures, items), player purchases at a gift shop, player points in a loyalty rewards program, and so forth. The processing circuitry 35 of the primary controller 34 may query, retrieve, and update information stored by the database system 50 related to the players 12 to enable the system 10 to operate as set forth herein.
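
The RFID-based identification step amounts to resolving each tag read into a player record and associating it with the tag's reported position. A minimal sketch, assuming a simple in-memory mapping (the tag ids, field names, and function name are illustrative, not from the patent):

```python
def identify_players(rf_reads, player_db):
    """Resolve RFID tag reads into a roster of identified players.

    `rf_reads` is a list of (tag_id, position) pairs reported by the RF
    sensor; `player_db` maps tag_id -> player record. Unknown tags are
    ignored. Returns player name -> last reported position.
    """
    roster = {}
    for tag_id, position in rf_reads:
        record = player_db.get(tag_id)
        if record is not None:
            roster[record["name"]] = position
    return roster
```

The primary controller could then merge this roster with the sensor-array data to label each tracked skeleton with a player identity.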

Additionally, the embodiment of the interactive video game system 10 illustrated in FIG. 1 includes an output controller 56 that is communicatively coupled to the primary controller 34. The output controller 56 generally includes processing circuitry 58 that executes instructions stored in memory circuitry 60 to control the output of stimuli (e.g., audio signals, video signals, lights, physical effects) that are observed and experienced by the players 12 in the play area 16. As such, the illustrated output controller 56 is communicatively coupled to audio devices 62 and the display device 24 to provide suitable control signals to operate these devices to provide particular output. In other embodiments, the output controller 56 may be coupled to any suitable number of audio and/or display devices. The display device 24 may be any suitable display device, such as a projector and screen, a flat-screen display device, or an array of flat-screen display devices, which is arranged and designed to provide a suitable view of the virtual environment 32 to the players 12 in the play area 16. In certain embodiments, the audio devices 62 may be arranged into an array about the play area 16 to increase player immersion during game play. For example, in certain embodiments, each of the audio devices 62 (e.g., each speaker) in such an array is independently controllable by the primary controller 34 to enable each of the players 12 to hear different sounds relative to the other players and unique to their own actions. In still other embodiments, the play area 16 may include robotic elements (e.g., androids, robotic animals, and so forth) that may be actuated in the real world in response to signals provided by the output controller 56 based on the actions of the players during game play.
For example, in addition or as an alternative to the virtual representations of the players presented on the display device 24, robotic representations of the players 12 provide non-virtual representations that are controlled responsive to the movements and behavior of the players 12 in the play area 16. In other embodiments, the system 10 may not include the output controller 56, and the processing circuitry 35 of the primary controller 34 may be communicatively coupled to the audio devices 62, the display device 24, and so forth, to generate the various stimuli for the players 12 in the play area 16 to observe and experience.

FIG. 2 is a schematic diagram of another embodiment of the interactive video game system 10, which enables multiple players 12 (e.g., players 12A and 12B) to control virtual representations 14 (e.g., virtual representations 14A and 14B) by performing actions in the play area 16. The embodiment of the interactive video game system 10 illustrated in FIG. 2 includes many of the features discussed herein with respect to FIG. 1, including the primary controller 34, the array 36 of sensing units 38, the output controller 56, and the display device 24. However, the embodiment of the interactive video game system 10 illustrated in FIG. 2 is described herein as having a 2D play area 16B. The term “2D play area” is used herein to refer to a play area 16 having a width (corresponding to the x-axis 18) and a height (corresponding to the y-axis 20), wherein the system 10 generally monitors the movements of each of the players 12 along the x-axis 18 and y-axis 20. For the embodiment illustrated in FIG. 2, the players 12A and 12B are respectively assigned sections 70A and 70B of the 2D play area 16B, and the players 12 do not wander outside of their respective assigned sections during game play. However, it may be appreciated that other embodiments of the interactive video game system 10 may include a sufficient number of sensors (e.g., LIDAR sensors, or other suitable sensors 40, located above the players) such that each of the players 12 can be continuously tracked as they freely move around the entire play area 16B while the system accounts for potential occlusion by other players as they move. The interactive video game system 10 updates the location of the virtual representations 14 presented on the display device 24 along the x-axis 26 and the y-axis 28 in the virtual environment 32 in response to the players 12 moving (e.g., running along the x-axis 18, jumping along the y-axis 20) within the 2D play area 16B. As mentioned, in certain embodiments, the array 36 may include fewer sensors (e.g., a single camera).

Additionally, the embodiment of the interactive video game system 10 illustrated in FIG. 2 includes an interface panel 74 that can enable enhanced player interactions. As illustrated in FIG. 2, the interface panel 74 includes a number of input devices 76 (e.g., cranks, wheels, buttons, sliders, blocks) that are designed to receive input from the players 12 during game play. As such, the illustrated interface panel 74 is communicatively coupled to the primary controller 34 to provide signals to the controller 34 indicative of how the players 12 are manipulating the input devices 76 during game play. The illustrated interface panel 74 also includes a number of output devices 78 (e.g., audio output devices, visual output devices, physical stimulation devices) that are designed to provide audio, visual, and/or physical stimuli to the players 12 during game play. As such, the illustrated interface panel 74 is communicatively coupled to the output controller 56 to receive control signals and to provide suitable stimuli to the players 12 in the play area 16 in response to suitable signals from the primary controller 34. For example, the output devices 78 may include audio devices, such as speakers, horns, sirens, and so forth. The output devices 78 may also include visual devices, such as lights or display devices of the interface panel 74.

In certain embodiments, the output devices 78 of the interface panel 74 include physical effect devices, such as an electronically controlled release valve coupled to a compressed air line, which provides a burst of warm or cold air or mist in response to a suitable control signal from the primary controller 34 or the output controller 56. It may be appreciated that the output devices are not limited to those incorporated into the interface panel 74. In certain embodiments, the play area 16 may include output devices that provide physical effects to players indirectly, such as through the air. For example, in an embodiment, when a player strikes a particular pose to trigger an ability or an action of the virtual representation, the player may experience a corresponding physical effect. By way of specific example, in an embodiment in which a player has the ability to throw snowballs, the player may receive a cold blast of air on their exposed palm in response to extending their hands in a particular manner. In an embodiment in which a player has the ability to throw fireballs, the player may receive a warm blast of air or IR irradiation (e.g., heat) in response to extending their hands in a particular manner. In still other embodiments, players may receive haptic feedback (e.g., ultrasonic haptic feedback) in response to the virtual representation of a player interacting with an object in the virtual world. For example, when the virtual representation of the player hits a wall with a punch in the virtual environment, the player may receive some physically perceptible effect on a portion of their body (e.g., an extended fist) that corresponds to the activity in the virtual environment.
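
The pose-to-effect mapping above can be expressed as a small dispatch table: the controller checks whether the trigger pose was struck and, if so, looks up the physical effect matching the player's current ability. The ability and effect names below are illustrative placeholders, not terms from the patent.

```python
# Hypothetical ability -> physical-effect mapping, mirroring the snowball
# (cold air) and fireball (warm air) examples in the text.
EFFECTS = {
    "snowball": "cold_air_burst",
    "fireball": "warm_air_burst",
}

def effect_for_pose(player_ability, palms_extended):
    """Return the physical effect to fire when the trigger pose is struck.

    Returns None when the pose is not struck or the ability has no
    associated physical effect.
    """
    if not palms_extended:
        return None
    return EFFECTS.get(player_ability)
```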

As illustrated in FIG. 2, the array 36 of sensing units 38 disposed around the 2D play area 16B of the illustrated embodiment of the interactive video game system 10 includes at least one sensing unit 38. That is, while certain embodiments of the interactive video game system 10 illustrated in FIG. 1 include the array 36 having at least two sensing units 38 per player, embodiments of the interactive video game system 10 illustrated in FIG. 2 include the array 36 having as few as one sensing unit 38 regardless of the number of players. In certain embodiments, the array 36 may include at least two sensing units disposed at right angles (90°) with respect to the players 12 in the 2D play area 16B. In certain embodiments, the array 36 may additionally or alternatively include at least two sensing units disposed on opposite sides (180°) with respect to the players 12 in the play area 16B. By way of specific example, in certain embodiments, the array 36 may include only two sensing units 38 disposed on different (e.g., opposite) sides of the players 12 in the 2D play area 16B.

As mentioned, the array 36 illustrated in FIGS. 1 and 2 is capable of collecting scanning data (e.g., volumetric or 2D scanning data) for each of the players 12 in the play area 16. In certain embodiments, the collected scanning data can be used to generate various models (e.g., player, shadow, skeletal) for each player, and these models can be subsequently updated based on the movements of the players during game play, as discussed below. However, it is presently recognized that using volumetric models that include texture data is substantially more processor-intensive (e.g., involves additional filtering, additional data processing) than using shadow models that lack this texture data. For example, in certain embodiments, the processing circuitry 35 of the primary controller 34 can generate a shadow model for each of the players 12 from scanning data (e.g., 2D scanning data) collected via the array 36 by using edge detection techniques to differentiate between the edges of the players 12 and their surroundings in the play area 16. It is presently recognized that such edge detection techniques are substantially less processor-intensive and involve substantially less filtering than using a volumetric model that includes texture data. As such, it is presently recognized that certain embodiments of the interactive video game system 10 generate and update shadow models instead of volumetric models that include texture, enabling a reduction in the size, complexity, and cost of the processing circuitry 35 of the primary controller 34. Additionally, as discussed below, the processing circuitry 35 can generate the virtual representations 14 of the players 12 based, at least in part, on the generated shadow models.
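
The patent describes deriving a texture-less silhouette via edge detection. A closely related and similarly lightweight stand-in, sketched below, is background differencing: pixels that differ from a reference frame of the empty play area by more than a threshold are marked as part of the player's silhouette. This is an illustrative assumption about one possible implementation, operating on plain nested lists of grayscale values.

```python
def shadow_mask(frame, background, threshold=30):
    """Binary silhouette mask from a grayscale frame.

    `frame` and `background` are equal-sized 2D grids (lists of rows) of
    pixel intensities; `background` is a reference image of the empty play
    area. A pixel is 1 (player) where it differs from the background by
    more than `threshold`, else 0.
    """
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Rendered on the display, such a mask has the shape of the player's shadow or silhouette, matching the "shadow model" described earlier.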

As mentioned, the scanning data collected by the array 36 of the interactive video game system 10 can be used to generate various models (e.g., a 2D or volumetric player model, a shadow model, a skeletal model) for each player. For example, FIG. 3 is a diagram illustrating skeletal models 80 (e.g., skeletal models 80A and 80B) and shadow models 82 (e.g., shadow models 82A and 82B) representative of players in the 3D play area 16A. FIG. 3 also illustrates corresponding virtual representations 14 (e.g., virtual representations 14A and 14B) of these players presented in the virtual environment 32 on the display device 24, in accordance with the present technique. As illustrated, the represented players are located at different positions within the 3D play area 16A of the interactive video game system 10 during game play, as indicated by the locations of the skeletal models 80 and the shadow models 82. The illustrated virtual representations 14 of the players in the virtual environment 32 are generated, at least in part, based on the shadow models 82 of the players. As the players move within the 3D play area 16A, as mentioned above, the primary controller 34 tracks these movements and accordingly generates updated skeletal models 80 and shadow models 82, as well as the virtual representations 14 of each player.

Additionally, embodiments of the interactive video game system 10 having the 3D play area 16A, as illustrated in FIGS. 1 and 3, enable player movement to be tracked along the z-axis 22 and translated to movement of the virtual representations 14 along the z-axis 30. As illustrated in FIG. 3, this enables the player represented by the skeletal model 80A and shadow model 82A to move to a front edge 84 of the 3D play area 16A, which results in the corresponding virtual representation 14A being presented at a relatively deeper point or level 86 along the z-axis 30 in the virtual environment 32. This also enables the player represented by the skeletal model 80B and the shadow model 82B to move to a back edge 88 of the 3D play area 16A, which results in the corresponding virtual representation 14B being presented at a substantially shallower point or level 90 along the z-axis 30 in the virtual environment 32. Further, for the illustrated embodiment, the size of the presented virtual representations 14 is modified based on the position of the players along the z-axis 22 in the 3D play area 16A. That is, the virtual representation 14A positioned relatively deeper along the z-axis 30 in the virtual environment 32 is presented as being substantially smaller than the virtual representation 14B positioned at a shallower depth or layer along the z-axis 30 in the virtual environment 32.
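
The depth-dependent sizing described above can be sketched as a simple linear scale: representations at the shallowest virtual depth are drawn at full size, and the scale shrinks toward a minimum as the representation moves deeper along the virtual z-axis. The function name, the linear falloff, and the default scale values are all illustrative assumptions.

```python
def representation_scale(depth, max_depth, near_scale=1.0, far_scale=0.4):
    """On-screen scale factor for a virtual representation at a given depth.

    `depth` is the representation's position along the virtual z-axis,
    from 0 (shallowest, drawn largest) to `max_depth` (deepest, drawn
    smallest). Depth is clamped to that range.
    """
    t = max(0.0, min(1.0, depth / max_depth))
    return near_scale + t * (far_scale - near_scale)
```

With this mapping, a representation at half the maximum depth is drawn at 70% scale under the default parameters, reproducing the "deeper means smaller" behavior in the text.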

It may be noted that, for embodiments of the interactive video game system 10 having the 3D play area 16A, as represented in FIGS. 1 and 3, the virtual representations 14 may only be able to interact with virtual objects that are positioned at a similar depth along the z-axis 30 in the virtual environment 32. For example, for the embodiment illustrated in FIG. 3, the virtual representation 14A is capable of interacting with a virtual object 92 that is positioned deeper along the z-axis 30 in the virtual environment 32, while the virtual representation 14B is capable of interacting with another virtual object 94 that is positioned at a relatively shallower depth along the z-axis 30 in the virtual environment 32. That is, the virtual representation 14A is not able to interact with the virtual object 94 unless the player represented by the models 80A and 82A changes position along the z-axis 22 in the 3D play area 16A, such that the virtual representation 14A moves to a similar depth as the virtual object 94 in the virtual environment 32.
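
This depth-gated interaction rule reduces to a single comparison: interaction is permitted only when the representation and the virtual object lie within some tolerance of each other along the virtual z-axis. The function name and tolerance value are illustrative; the patent does not specify a numeric threshold.

```python
def can_interact(rep_depth, object_depth, tolerance=0.5):
    """Allow interaction only when the virtual representation and the
    virtual object sit at a similar depth along the virtual z-axis."""
    return abs(rep_depth - object_depth) <= tolerance
```

For example, a representation at depth 2.0 could interact with an object at depth 2.3 but not with one at depth 0.5, until the player moves along the physical z-axis to bring the representation to a matching depth.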

For comparison, FIG. 4 is a diagram illustrating an example of skeletal models 80 (e.g., skeletal models 80A and 80B) and shadow models 82 (e.g., shadow models 82A and 82B) representative of players in the 2D play area 16B. FIG. 4 also illustrates virtual representations 14 (e.g., virtual representations 14A and 14B) of the players presented on the display device 24. As the players move within the 2D play area 16B, as mentioned above, the primary controller 34 tracks these movements and accordingly updates the skeletal models 80, the shadow models 82, and the virtual representations 14 of each player. As mentioned, embodiments of the interactive video game system 10 having the 2D play area 16B illustrated in FIGS. 2 and 4 do not track player movement along a z-axis (e.g., the z-axis 22 illustrated in FIGS. 1 and 3). Instead, for embodiments with the 2D play area 16B, the size of the presented virtual representations 14 may be modified based on a status or condition of the players inside and/or outside of game play. For example, in FIG. 4, the virtual representation 14A is substantially larger than the virtual representation 14B. In certain embodiments, the size of the virtual representations 14A and 14B may be enhanced or exaggerated in response to the virtual representation 14A or 14B interacting with a particular item, such as in response to the virtual representation 14A obtaining a power-up during a current or previous round of game play. In other embodiments, the exaggerated size of the virtual representation 14A, as well as other modifications of the virtual representations (e.g., texture, color, transparency, items worn or carried by the virtual representation), may be the result of the corresponding player interacting with objects or items outside of the interactive video game system 10, as discussed below.

It is presently recognized that embodiments of the interactive video game system 10 that utilize a 2D play area 16B, as represented in FIGS. 2 and 4, enable particular advantages over embodiments of the interactive video game system 10 that utilize the 3D play area 16A, as illustrated in FIG. 1. For example, as mentioned, the array 36 of sensing units 38 in the interactive video game system 10 having the 2D play area 16B, as illustrated in FIG. 2, includes fewer sensing units 38 than the interactive video game system 10 with the 3D play area 16A, as illustrated in FIG. 1. That is, depth (e.g., location and movement along the z-axis 22, as illustrated in FIG. 1) is not tracked for the interactive video game system 10 having the 2D play area 16B, as represented in FIGS. 2 and 4. Additionally, since players 12A and 12B remain in their respective assigned sections 70A and 70B of the 2D play area 16B, the potential for occlusion is substantially reduced. For example, by having players remain within their assigned sections 70 of the 2D play area 16B, occlusion between players only occurs predictably along the x-axis 18. As such, by using the 2D play area 16B, the embodiment of the interactive video game system 10 illustrated in FIG. 2 enables the use of a smaller array 36 having fewer sensing units 38 to track the players 12, compared to the embodiment of the interactive video game system 10 of FIG. 1.

Accordingly, it is recognized that the smaller array 36 of sensing units 38 used by embodiments of the interactive video game system 10 having the 2D play area 16B also generates considerably less data to be processed than embodiments having the 3D play area 16A. For example, because occlusion between players 12 is significantly more limited and predictable in the 2D play area 16B of FIGS. 2 and 4, fewer sensing units 38 can be used in the array 36 while still covering a substantial portion of potential vantage points around the play area 16. As such, for embodiments of the interactive video game system 10 having the 2D play area 16B, the processing circuitry 35 of the primary controller 34 may be smaller, simpler, and/or more energy efficient, relative to the processing circuitry 35 of the primary controller 34 for embodiments of the interactive video game system 10 having the 3D play area 16A.

As mentioned, the interactive video game system 10 is capable of generating various models of the players 12. More specifically, in certain embodiments, the processing circuitry 35 of the primary controller 34 is configured to receive partial model data (e.g., partial player, shadow, and/or skeletal models) from the various sensing units 38 of the array 36 and fuse the partial models into complete models (e.g., complete volumetric, shadow, and/or skeletal models) for each of the players 12. Set forth below is an example in which the processing circuitry 35 of the primary controller 34 fuses partial skeletal models received from the various sensing units 38 of the array 36. It may be appreciated that, in certain embodiments, the processing circuitry 35 of the primary controller 34 may use a similar process to fuse partial shadow model data into a shadow model and/or to fuse partial volumetric model data.

In an example, partial skeletal models are generated by each sensing unit 38 of the interactive video game system 10 and are subsequently fused by the processing circuitry 35 of the primary controller 34. In particular, the processing circuitry 35 may perform a one-to-one mapping of corresponding bones of each of the players 12 in each of the partial skeletal models generated by different sensing units 38 positioned at different angles (e.g., opposite sides, perpendicular) relative to the play area 16. In certain embodiments, relatively small differences between the partial skeletal models generated by different sensing units 38 may be averaged when fused by the processing circuitry 35 to provide smoothing and prevent jerky movements of the virtual representations 14. Additionally, when a partial skeletal model generated by a particular sensing unit differs significantly from the partial skeletal models generated by at least two other sensing units, the processing circuitry 35 of the primary controller 34 may determine the data to be erroneous and, therefore, not include the data in the skeletal models 80. For example, if a particular partial skeletal model is missing a bone that is present in the other partial skeletal models, then the processing circuitry 35 may determine that the missing bone is likely the result of occlusion, and may discard all or some of the partial skeletal model in response.
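
The fusion strategy above (average small disagreements, treat bones missing from occluded views with suspicion) might be sketched as follows. This is a simplified hypothetical implementation, not the patent's; the bone representation, the agreement rule, and the function name are assumptions.

```python
from statistics import mean

def fuse_skeletal_models(partial_models, min_agreement=2):
    # Each partial model maps bone names to estimated (x, y, z) positions
    # reported by one sensing unit. A bone is kept only when at least
    # `min_agreement` units report it (a missing bone is treated as likely
    # occlusion); reported positions are averaged to smooth the small
    # differences between units and avoid jerky movement.
    fused = {}
    all_bones = {bone for model in partial_models for bone in model}
    for bone in all_bones:
        observations = [model[bone] for model in partial_models if bone in model]
        if len(observations) >= min_agreement:
            fused[bone] = tuple(mean(axis) for axis in zip(*observations))
    return fused
```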

It may be noted that precise coordination of the components of the interactive video game system 10 is desirable to provide smooth and responsive movements of the virtual representations 14 in the virtual environment 32. In particular, to properly fuse the partial models (e.g., partial skeletal, volumetric, and/or shadow models) generated by the sensing units 38, the processing circuitry 35 may consider the time at which each of the partial models is generated by the sensing units 38. In certain embodiments, the interactive video game system 10 may include a system clock 100, as illustrated in FIGS. 1 and 2, which is used to synchronize operations within the system 10. For example, the system clock 100 may be a component of the primary controller 34 or another suitable electronic device that is capable of generating a time signal that is broadcast over the network 48 of the interactive video game system 10. In certain embodiments, various devices coupled to the network 48 may receive and use a time signal to adjust respective clocks at particular times (e.g., at the start of game play), and the devices may subsequently include timing data based on signals from these respective clocks when providing game play data to the primary controller 34. In other embodiments, the various devices coupled to the network 48 continually receive the time signal from the system clock 100 (e.g., at regular microsecond intervals) throughout game play, and the devices subsequently include timing data from the time signal when providing data (e.g., volumetric scanning data, partial model data) to the primary controller 34. Additionally, the processing circuitry 35 of the primary controller 34 can determine whether a partial model (e.g., a partial volumetric, shadow, or skeletal model) generated by a sensing unit 38 is sufficiently fresh (e.g., recent, contemporary with other data) to be used to generate or update the complete model, or if the data should be discarded as stale. Accordingly, in certain embodiments, the system clock 100 enables the processing circuitry 35 to properly fuse the partial models generated by the various sensing units 38 into suitable volumetric, shadow, and/or skeletal models of the players 12.
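
The freshness test described above reduces to comparing a partial model's timestamp against the current system clock reading. The sketch below is a hypothetical illustration; the microsecond units follow the paragraph, but the 50 ms staleness threshold is an assumed value.

```python
def is_fresh(model_timestamp_us: int, clock_timestamp_us: int,
             max_age_us: int = 50_000) -> bool:
    # A partial model is used to generate or update the complete model only
    # if its timestamp (derived from the broadcast time signal) is within a
    # maximum age of the current system clock reading; otherwise it is
    # discarded as stale. The 50 ms threshold is an illustrative assumption.
    return (clock_timestamp_us - model_timestamp_us) <= max_age_us
```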

FIG. 5 is a flow diagram illustrating an embodiment of a process 110 for operating the interactive video game system 10, in accordance with the present technique. It may be appreciated that, in other embodiments, certain steps of the illustrated process 110 may be performed in a different order, repeated multiple times, or skipped altogether, in accordance with the present disclosure. The process 110 illustrated in FIG. 5 may be executed by the processing circuitry 35 of the primary controller 34 alone, or in combination with other suitable processing circuitry (e.g., processing circuitry 46, 52, and/or 58) of the system 10.

The illustrated embodiment of the process 110 begins with the interactive video game system 10 collecting (block 112) scanning data for each player. In certain embodiments, as illustrated in FIGS. 1-4, the players 12 may be scanned or imaged by the sensing units 38 positioned around the play area 16. For example, in certain embodiments, before game play begins, the players 12 may be prompted to strike a particular pose while the sensing units 38 of the array 36 collect scanning data (e.g., volumetric and/or 2D scanning data) regarding each player. In other embodiments, the players 12 may be volumetrically scanned by a separate system prior to entering the play area 16. For example, a line of waiting players may be directed through a pre-scanning system (e.g., similar to a security scanner at an airport) in which each player is individually scanned (e.g., while striking a particular pose) to collect the scanning data for each player. In certain embodiments, the pre-scanning system may be a smaller version of the 3D play area 16A illustrated in FIG. 1 or the 2D play area 16B in FIG. 2, in which an array 36 including one or more sensing units 38 is positioned about an individual player to collect the scanning data. In other embodiments, the pre-scanning system may include fewer sensing units 38 (e.g., 1, 2, 3) positioned around the individual player, and the sensing units 38 are rotated around the player to collect the complete scanning data. It is presently recognized that it may be desirable to collect the scanning data indicated in block 112 while the players 12 are in the play area 16 to enhance the efficiency of the interactive video game system 10 and to reduce player wait times.

Next, the interactive video game system 10 generates (block 114) corresponding models for each player based on the scanning data collected for each player. As set forth above, in certain embodiments, the processing circuitry 35 of the primary controller 34 may receive partial models for each of the players from each of the sensing units 38 in the array 36, and may suitably fuse these partial models to generate suitable models for each of the players. For example, the processing circuitry 35 of the primary controller 34 may generate a player model (e.g., a volumetric or 2D player model) for each player that generally defines a 2D or 3D shape of each player. Additionally or alternatively, the processing circuitry 35 of the primary controller 34 may generate a shadow model for each player that generally defines a texture-less 3D shape of each player. Furthermore, the processing circuitry 35 may also generate a skeletal model that generally defines predicted skeletal positions and locations of each player within the play area.

Continuing through the example process 110, the interactive video game system 10 next generates (block 116) a corresponding virtual representation for each player based, at least in part, on the scanning data collected for each player and/or one or more of the models generated for each player. For example, in certain embodiments, the processing circuitry 35 of the primary controller 34 may use a shadow model generated in block 114 as a basis to generate a virtual representation of a player. It may be appreciated that, in certain embodiments, the virtual representations 14 may have a shape or outline that is substantially similar to the shadow model of the corresponding player, as illustrated in FIGS. 3 and 4. In addition to shape, the virtual representations 14 may have other properties that can be modified to correspond to properties of the represented player. For example, a player may be associated with various properties (e.g., items, statuses, scores, statistics) that reflect their performance in other game systems, their purchases in a gift shop, their membership to a loyalty program, and so forth. Accordingly, properties (e.g., size, color, texture, animations, presence of virtual items) of the virtual representation may be set in response to the various properties associated with the corresponding player, and further modified based on changes to the properties of the player during game play. Also, a corresponding virtual representation for a player may be based only partially on the scanning data and/or the shadow model generated for the player, such that the virtual representation includes enhanced and/or modified visual characteristics relative to the actual appearance of the player.
For example, in an embodiment, for a player in a seated position (e.g., seated in a chair or a wheelchair), a virtual representation may be generated in which an upper portion of the virtual representation includes a realistic silhouette (e.g., based on the shadow model of the player), while a lower portion of the virtual representation is illustrated in an alternative or abstract manner (e.g., as a floating cloud). In another embodiment, the upper portion of the body of the virtual representation includes a realistic silhouette (e.g., based on the shadow model of the player), while a lower portion of the body of the virtual representation is illustrated as that of a horse, yielding a centaur-like virtual representation. In such embodiments, the lower horse portion of the virtual representation may move like a horse in a manner that corresponds to (e.g., is synchronized with) movement of the feet of the player in a directly correlated or an augmented fashion, as discussed in greater detail below.

It may be noted that, in certain embodiments, the virtual representations 14 of the players 12 may not have an appearance or shape that substantially resembles the generated player or shadow models. For example, in certain embodiments, the interactive video game system 10 may include or be communicatively coupled to a pre-generated library of virtual representations that are based on fictitious characters (e.g., avatars), and the system may select particular virtual representations, or provide recommendations of particular selectable virtual representations, for a player generally based on the generated player or shadow model of the player. For example, if the game involves a larger hero and a smaller sidekick, the interactive video game system 10 may select or recommend from the pre-generated library a relatively larger hero virtual representation for an adult player and a relatively smaller sidekick virtual representation for a child player.
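
As a minimal illustration of the hero/sidekick recommendation above, the selection could key off a single feature of the generated model, such as the player's height. This is a hypothetical sketch; the threshold value and avatar names are assumptions, not from the patent.

```python
def recommend_avatar(player_height_m: float,
                     height_threshold_m: float = 1.5) -> str:
    # Recommend a pre-generated library avatar from the height implied by
    # the player's shadow model: taller (adult) players get the larger
    # "hero" representation, shorter (child) players the smaller
    # "sidekick". The 1.5 m threshold is an illustrative assumption.
    return "hero" if player_height_m >= height_threshold_m else "sidekick"
```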

The process 110 continues with the interactive video game system 10 presenting (block 118) the corresponding virtual representations 14 of each of the players in the virtual environment 32 on the display device 24. In certain embodiments, the actions in block 118 may also include presenting other introductory material, such as a welcome message or orientation/instructional information, to the players 12 in the play area 16 before game play begins. Furthermore, in certain embodiments, the processing circuitry 35 of the primary controller 34 may also provide suitable signals to set or modify parameters of the environment within the play area 16. For example, these modifications may include adjusting house light brightness and/or color, playing game music or game sound effects, adjusting the temperature of the play area, activating physical effects in the play area, and so forth.

Once game play begins, the virtual representations 14 generated in block 116 and presented in block 118 are capable of interacting with one another and/or with virtual objects (e.g., virtual objects 92 and 94) in the virtual environment 32, as discussed herein with respect to FIGS. 3 and 4. During game play, the interactive video game system 10 generally determines (block 120) the in-game actions of each of the players 12 in the play area 16 and the corresponding in-game effects of these in-game actions. Additionally, the interactive video game system 10 generally updates (block 122) the corresponding virtual representations 14 of the players 12 and/or the virtual environment 32 based on the in-game actions of the players 12 in the play area 16 and the corresponding in-game effects determined in block 120. As indicated by the arrow 124, the interactive video game system 10 may repeat the steps indicated in blocks 120 and 122 until game play is complete, for example, due to one of the players 12 winning the round of game play or due to an expiration of an allotted game play time.
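
The repetition of blocks 120 and 122 until game play is complete is essentially a game loop. The sketch below is a hypothetical illustration of that structure only; every callable name is an assumed stand-in for a system component, not an interface defined by the patent.

```python
def run_game_round(update_models, identify_actions, resolve_effects,
                   update_presentation, game_over):
    # Repeats blocks 120 and 122: determine each player's in-game actions
    # and the resulting in-game effects (block 120), then update the
    # virtual representations and virtual environment (block 122), until
    # the round is won or the allotted game play time expires.
    effects_log = []
    while not game_over():
        models = update_models()                       # track players
        actions = identify_actions(models)             # in-game actions
        effects = resolve_effects(actions)             # in-game effects
        update_presentation(models, actions, effects)  # refresh display
        effects_log.append(effects)
    return effects_log
```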

FIG. 6 is a flow diagram that illustrates an example embodiment of a more detailed process 130 by which the interactive video game system 10 performs the actions indicated in blocks 120 and 122 of FIG. 5. That is, the process 130 indicated in FIG. 6 includes a number of steps to determine the in-game actions of each player in the play area and the corresponding in-game effects of these in-game actions, as indicated by the bracket 120, as well as a number of steps to update the corresponding virtual representation of each player and/or the virtual environment, as indicated by the bracket 122. In certain embodiments, the actions described in the process 130 may be encoded as instructions in a suitable memory, such as the memory circuitry 33 of the primary controller 34, and executed by a suitable processor, such as the processing circuitry 35 of the primary controller 34, of the interactive video game system 10. It should be noted that the illustrated process 130 is merely provided as an example, and that in other embodiments, certain actions described may be performed in different orders, may be repeated, or may be skipped altogether.

The process 130 of FIG. 6 begins with the processing circuitry 35 receiving (block 132) partial models from a plurality of sensing units in the play area. As discussed herein with respect to FIGS. 1 and 2, the interactive video game system 10 includes the array 36 of sensing units 38 disposed in different positions around the play area 16, and each of these sensing units 38 is configured to generate one or more partial models (e.g., partial player, shadow, and/or skeletal models) for at least a portion of the players 12. Additionally, as mentioned, the processing circuitry 35 may also receive data from other devices (e.g., RF sensor 45, input devices 76) regarding the actions of the players 12 disposed within the play area 16. Further, as mentioned, these partial models may be timestamped based on a signal from the system clock 100 and provided to the processing circuitry 35 of the primary controller 34 via the high-speed IP network 48.

For the illustrated embodiment of the process 130, after receiving the partial models from the sensing units 38, the processing circuitry 35 fuses the partial models to generate (block 134) updated models (e.g., player, shadow, and/or skeletal models) for each player based on the received partial models. For example, the processing circuitry 35 may update a previously generated model, such as an initial skeletal model generated in block 114 of the process 110 of FIG. 5. Additionally, as discussed, when combining the partial models, the processing circuitry 35 may filter or remove data that is inconsistent or delayed to improve accuracy when tracking players despite potential occlusion or network delays.

Next, the illustrated process 130 continues with the processing circuitry 35 identifying (block 136) one or more in-game actions of the corresponding virtual representations 14 of each of the players 12 based, at least in part, on the updated models of the players generated in block 134. For example, the in-game actions may include jumping, running, sliding, or other movements of the virtual representations 14 within the virtual environment 32. In-game actions may also include interacting with (e.g., moving, obtaining, losing, consuming) an item, such as a virtual object in the virtual environment 32. In-game actions may also include completing a goal, defeating another player, winning a round, or other similar in-game actions.
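
A toy version of the action identification in block 136 could classify a pose from the fused skeletal model. This sketch is not from the patent; the bone names, the both-feet-off-the-floor rule, and the threshold are all illustrative assumptions.

```python
def identify_action(skeletal_model, jump_threshold_m=0.15):
    # Classify a simple in-game action from a fused skeletal model mapping
    # bone names to (x, y, z) positions, where y is height above the play
    # area floor. Hypothetical rule: if both feet are above the floor by
    # more than the threshold, the player is jumping; otherwise standing.
    feet_heights = (skeletal_model["left_foot"][1],
                    skeletal_model["right_foot"][1])
    return "jump" if min(feet_heights) > jump_threshold_m else "stand"
```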

Next, the processing circuitry 35 may determine (block 138) one or more in-game effects triggered in response to the identified in-game actions of each of the players 12. For example, when the determined in-game action is a movement of a player, then the in-game effect may be a corresponding change in position of the corresponding virtual representation within the virtual environment. When the determined in-game action is a jump, the in-game effect may include moving the virtual representation along the y-axis 20, as illustrated in FIGS. 1-4. When the determined in-game action is activating a particular power-up item, then the in-game effect may include modifying a status (e.g., a health status, a power status) associated with the players 12. Additionally, in certain cases, the movements of the virtual representations 14 may be accentuated or augmented relative to the actual movements of the players 12. For example, as discussed above with respect to modifying the appearance of the virtual representation, the movements of a virtual representation of a player may be temporarily or permanently exaggerated (e.g., able to jump higher, able to jump farther) relative to the actual movements of the player based on properties associated with the player, including items acquired during game play, items acquired during other game play sessions, items purchased in a gift shop, and so forth.
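
The exaggeration of a player's movement based on acquired items might be expressed as a multiplier applied when translating the measured action into an in-game effect. This is a hypothetical sketch; the "super_jump" item name and the 4x multiplier are assumptions for illustration.

```python
def augmented_jump_height(measured_jump_m: float, player_items: set) -> float:
    # Translate the player's measured jump height into the (possibly
    # exaggerated) jump of the virtual representation along the y-axis.
    # The item name "super_jump" and the 4x multiplier are illustrative
    # assumptions, not values from the patent.
    multiplier = 4.0 if "super_jump" in player_items else 1.0
    return measured_jump_m * multiplier
```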

The illustrated process 130 continues with the processing circuitry 35 generally updating the presentation to the players in the play area 16 based on the in-game actions of each player and the corresponding in-game effects, as indicated by bracket 122. In particular, the processing circuitry 35 updates (block 140) the corresponding virtual representations 14 of each of the players 12 and the virtual environment 32 based on the updated models (e.g., shadow and skeletal models) of each player generated in block 134, the in-game actions identified in block 136, and/or the in-game effects determined in block 138, to advance game play. For example, for the embodiments illustrated in FIGS. 1 and 2, the processing circuitry 35 may provide suitable signals to the output controller 56, such that the processing circuitry 58 of the output controller 56 updates the virtual representations 14 and the virtual environment 32 presented on the display device 24.

Additionally, the processing circuitry 35 may provide suitable signals to generate (block 142) one or more sounds and/or one or more physical effects (block 144) in the play area 16 based, at least in part, on the determined in-game effects. For example, when the in-game effect is determined to be a particular virtual representation of a player crashing into a virtual pool, the primary controller 34 may cause the output controller 56 to signal the speakers 62 to generate suitable splashing sounds and/or the physical effects devices 78 to generate a blast of mist. Additionally, sounds and/or physical effects may be produced in response to any number of in-game effects, including, for example, gaining a power-up, losing a power-up, scoring a point, or moving through particular types of environments. As mentioned with respect to FIG. 5, the process 130 of FIG. 6 may repeat until game play is complete, as indicated by the arrow 124.

Furthermore, it may be noted that the interactive video game system 10 can also enable other functionality using the scanning data collected by the array 36 of sensing units 38. For example, as mentioned, in certain embodiments, the processing circuitry 35 of the primary controller 34 may generate a player model (e.g., a volumetric or 2D player model) that includes both the texture and the shape of each player. At the conclusion of game play, the processing circuitry 35 of the primary controller 34 can generate simulated images that use the models of the players to render a 2D or 3D likeness of the player within a portion of the virtual environment 32, and these can be provided (e.g., printed, electronically transferred) to the players 12 as souvenirs of their game play experience. For example, this may include a print of a simulated image illustrating the volumetric model of a player crossing a finish line within a scene from the virtual environment 32.

FIGS. 7-13 illustrate example embodiments of the interactive video game system 10 that enable the generation of virtual representations having augmented appearance and/or movements relative to those of the player. For these example embodiments, while only a single player is illustrated for simplicity, it is envisioned that these interactive video game systems may be simultaneously used by any suitable number of players (e.g., 12 players), as discussed above. Additionally, while not illustrated for simplicity, the example interactive video game systems illustrated in FIGS. 7-13 include any suitable features (e.g., sensors, controllers, display devices, physical effects devices, and so forth) mentioned herein to enable operation of the system 10, as discussed above.

With the foregoing in mind, in certain embodiments, virtual representations may be modified to appear and/or move differently from the corresponding players. That is, in certain embodiments, a virtual representation associated with a particular player may be able to transform or move in ways that do not directly correspond to (e.g., are not exactly the same as) the appearance or movement of the players. In certain embodiments, the virtual representations are not restricted by real world physical limitations imposed on the appearance or movement of the players, and, therefore, may be described as being associated with super human abilities. For example, in certain embodiments, virtual representations may include characters having greater-than-normal or super human abilities, such as characters that can jump higher or stretch farther than a realistic human can. In other embodiments, these super human abilities may include super speed, super strength, size-altering abilities (e.g., to shrink and grow), abilities to shoot projectiles from various body parts (e.g., laser shooting eyes or hands, throwing fire or ice), and so forth. Accordingly, when players are in control of such virtual representations, then particular actual or real-world movements by the players trigger (e.g., are translated into) these super human abilities of the virtual representations. By way of further example, in certain embodiments, the virtual representations may be representations of non-human entities. For example, in certain embodiments, the virtual representations may be animal-based representations of the players, wherein these representations have abilities (e.g., modes or styles of movement) that are distinct from, and/or augmented relative to, those of ordinary humans.

In one example illustrated in FIG. 7, a player 12 is positioned within the play area 16 during gameplay of the interactive video game system 10, while portions of the virtual environment 32 are presented on the display device 24. More specifically, the virtual environment 32 includes the virtual representation 14 that represents the player 12. As such, for the illustrated example, the virtual representation 14 has an appearance that generally resembles the appearance of the player 12, based on the scanning data and various models discussed above. However, unlike other examples discussed above, the virtual representation 14 illustrated in FIG. 7 demonstrates augmented physical movement relative to the detected movements of the player 12.

For the example illustrated in FIG. 7, the player 12 is illustrated as jumping a modest distance from the floor of the play area 16, while the virtual representation 14 is illustrated as performing a substantially larger jumping motion relative to a floor of the virtual environment 32. As such, the virtual representation 14 demonstrates an augmented (e.g., enhanced, exaggerated) jumping ability that is beyond that of a normal human (e.g., a super human jumping ability). In certain cases, the augmented jumping ability may be performed by the virtual representation 14 after acquiring a particular item (e.g., power-up) within the virtual environment 32, and may be temporary or permanent after acquiring the item. In other embodiments, the augmented jumping ability may be a feature or aspect of a particular character (e.g., a fictional character from a video game, book, or movie) upon which the virtual representation 14 is based. For such embodiments, by selecting a character associated with an augmented jumping ability, the virtual representation 14 may demonstrate this augmented jumping ability throughout gameplay. It should be appreciated that the augmented jumping illustrated in FIG. 7 is just one example of augmented movements, and that in other embodiments, any other suitable type of player movement (e.g., running, hopping, spinning, dancing, and so forth) may be identified and augmented by the processing circuitry 35 of the primary controller 34 based on the scanning data and the models (e.g., the skeletal models) discussed above, in accordance with the present disclosure.

In certain embodiments, a virtual representation 14 may be associated with abilities that affect both the appearance and the movement of the virtual representation 14 in response to particular movements of the player 12. For the example of FIG. 8, a player 12 is disposed in the play area 16 during gameplay of the interactive video game system 10, while a corresponding virtual representation 14 that is associated with a size-altering super human ability is presented on the display device 24. In the particular example illustrated in FIG. 8, the player 12 has dropped to a crouching pose during gameplay. This crouching pose, when detected by the processing circuitry 35 of the primary controller 34 in the scanning data and the one or more models discussed above, represents a special or control pose that triggers a particular augmented ability of the virtual representation 14 or the virtual environment 32. It may be appreciated that, in other embodiments, other control poses may be used, in accordance with the present disclosure.

For the example illustrated in FIG. 8, in response to detecting the player 12 in the control pose (e.g., the crouching pose), the size of the illustrated virtual representation 14 is dramatically decreased, effectively shrinking the virtual representation 14 within the virtual environment 32. In certain embodiments, the virtual representation 14 may only maintain the reduced or diminished size while the player 12 remains in a crouching position. In other embodiments, once the controller 34 determines that the player 12 has taken the crouching control pose, the player 12 may stand erect again without the virtual representation 14 returning to its previous or original size. For such embodiments, the size of the virtual representation 14 may remain diminished until the primary controller 34 determines that the player 12 has assumed a second control pose (e.g., standing with arms and legs extended in a general “X” shape), triggering the enlargement of the virtual representation 14. In this manner, upon detecting one or more control poses, the primary controller 34 may trigger one or more special abilities or super powers that are either temporarily or permanently associated with the virtual representation 14.
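
The latching behavior described above (a crouch shrinks the representation, which stays shrunk until the "X" pose restores it) is a small state machine. This sketch is a hypothetical illustration; the pose labels and scale values are assumptions, not values from the patent.

```python
class SizeToggle:
    # Tracks the latched size state of a virtual representation driven by
    # control poses: a crouch shrinks it, and it remains shrunk through
    # other poses until an "X" pose triggers re-enlargement. The pose
    # names and the 0.25 scale are illustrative assumptions.
    def __init__(self) -> None:
        self.scale = 1.0

    def on_pose(self, pose: str) -> float:
        if pose == "crouch":
            self.scale = 0.25   # shrink and latch
        elif pose == "x_pose":
            self.scale = 1.0    # restore original size
        return self.scale       # any other pose leaves the state latched
```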

It may be appreciated that, for the example illustrated in FIG. 8, the modified appearance of the virtual representation 14 may also be associated with differences in the movements and/or abilities of the virtual representation 14 within the virtual environment 32. For example, in certain situations, the smaller sized virtual representation 14 may demonstrate augmented (e.g., enhanced, exaggerated) movements relative to the detected movements of the player 12. That is, in certain situations, the smaller sized virtual representation 14 may continue to jump as high and run as fast as the player 12 despite its diminutive size. In other cases, the movement of the smaller sized virtual representation 14 may be reduced or lessened relative to the detected motion of the player 12 until the virtual representation 14 is restored to full size. In certain embodiments, the smaller sized virtual representation 14 may demonstrate enhanced effects relative to features within the virtual environment 32. For example, the smaller sized virtual representation 14 may be more easily displaced or affected by a wind or current moving in the virtual environment 32, or may gain entry to locations in the virtual environment 32 that would be inaccessible to the larger sized virtual representation 14.

For the example of FIG. 9, a player 12 is disposed in the play area 16 during gameplay of the interactive video game system 10, while a corresponding virtual representation 14 associated with a super human stretching ability is presented on the display device 24. In the particular example illustrated in FIG. 9, the player 12 is extending or stretching their arms from their sides during gameplay. For the illustrated embodiment, the virtual representation 14 that is associated with a super stretching ability may be based on a character selection by the player 12 at the beginning of gameplay, or may be based on a particular item (e.g., a super stretching power-up) obtained by the virtual representation 14 in the virtual environment 32 during gameplay.

For the embodiment illustrated in FIG. 9, in response to the primary controller 34 determining that the player 12 is extending their arms to a maximum extent, the processing circuitry 35 modifies both the appearance and movement of the virtual representation 14, such that the arms of the virtual representation 14 extend in an augmented (e.g., enhanced, exaggerated) manner. It may be appreciated that this may enable the virtual representation 14 to perform particular tasks in the virtual environment 32. In other situations, the virtual representation 14 may stretch in different manners (e.g., from the legs, from the torso, from the neck) based on other movements or poses of the player 12 during gameplay. This augmented stretching ability may generally enable the virtual representation to access elements (e.g., items, weapons, entrances/exits, enemies, allies) that would otherwise be inaccessible in the virtual environment 32, providing the player 12 with an engaging and creative problem solving experience. Additionally, while a super stretching ability is illustrated in FIG. 9, in other embodiments, other enhanced abilities, such as super speed, super strength, and so forth, may also be implemented in accordance with the present disclosure.
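One way to realize the stretch augmentation described above is a piecewise mapping from the detected arm extension to a virtual arm length, tracking the real arm up to a threshold and exaggerating beyond it. The threshold, gain, and base-length constants in this sketch are illustrative assumptions:

```python
def augmented_arm_length(detected_extension_m: float,
                         base_length_m: float = 0.7,
                         gain: float = 4.0,
                         threshold_m: float = 0.6) -> float:
    """Map a player's detected arm extension (meters) to a virtual
    arm length. Below the threshold the virtual arm tracks the real
    arm; at or beyond it, the extra extension is exaggerated by the
    gain. All constants are illustrative, not from the disclosure.
    """
    if detected_extension_m < threshold_m:
        return base_length_m + detected_extension_m
    # Exaggerate only the portion beyond the threshold, so the
    # transition into the "super stretch" is continuous.
    extra = (detected_extension_m - threshold_m) * gain
    return base_length_m + threshold_m + extra
```

A rig or skeleton driver could apply the returned length to the arm bones of the virtual representation each frame.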

In certain embodiments, rather than exactly reproducing the appearance and movements of the player, a virtual representation may appear and move like a real or fictitious non-human entity, such as an animal virtual representation. In certain embodiments, a player may select a particular animal-based virtual representation at the beginning of gameplay, while in other embodiments, the animal-based virtual representation may be assigned automatically based on scanning data and/or models associated with the player. In certain embodiments, once selected or assigned, the virtual representation may remain the same throughout gameplay, while in other embodiments, the virtual representation may change periodically, or in response to particular movements or achievements of the player (e.g., different animal representations for different terrains in the virtual environment or different levels). When the virtual representation takes the form of a particular animal, the virtual representation may have particular types of abilities (e.g., types of movement) that are different from those of the player 12, including some that may be difficult or impossible for the player 12 to actually perform (e.g., trotting like a horse, hopping like a kangaroo, swimming like a fish, flying like a bird, and so forth). As such, the appearance and movements detected by the primary controller 34 may be augmented (e.g., exaggerated, enhanced), such that the player 12 can use feasible, realistic human poses and movements within the play area 16 that are augmented to generate movements of the animal-based virtual representation 14.

FIG. 10 illustrates an example embodiment in which a player 12 is disposed in the play area 16 during gameplay of the interactive video game system 10, while a corresponding animal-based virtual representation 14 capable of leaping movement is presented on the display device 24. For the illustrated embodiment, the virtual representation 14 is a stag virtual representation 14 that may be either selected by the player 12 at the beginning of gameplay, or selected by the processing circuitry 35 of the primary controller 34 based on the scanning data and/or models associated with the player 12. For example, in one embodiment, the stag virtual representation 14 may be selected by the primary controller 34 for the player 12 upon detecting that the player 12 has “ponytails” or “pigtails” that remotely resemble the antlers of a stag. For such embodiments, the virtual representation 14 may include one or more features or characteristics that generally correspond to augmented (e.g., enhanced, exaggerated) features of the player 12 in a manner similar to caricature art.

For the example illustrated in FIG. 10, the primary controller 34 detects that the player 12 is skipping across the play area 16 during gameplay based on the scanning data and the one or more models. The primary controller 34 translates the detected movements of the player 12 into suitable movement for the stag virtual representation 14. In particular, the primary controller 34 augments (e.g., enhances, exaggerates) the detected movements of the player 12, such that the stag virtual representation 14 leaps in the virtual environment 32 to a height and/or distance that is greater than what is detected, and potentially greater than what may be possible, for the player 12. Additionally, as mentioned above, the augmented moving (e.g., jumping, leaping, bounding) ability demonstrated by the virtual representation 14 may be used by the player 12 to achieve particular objectives in the virtual environment 32.
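The leap augmentation described above amounts to scaling the detected jump metrics by gain factors, optionally clamped so the leap stays plausible within the virtual scene. The gains and clamp limits in this sketch are illustrative assumptions:

```python
def stag_leap(jump_height_m: float, jump_distance_m: float,
              height_gain: float = 3.0, distance_gain: float = 5.0,
              max_height_m: float = 4.0, max_distance_m: float = 10.0):
    """Exaggerate a detected skip or jump into a stag leap.

    The detected height and distance are multiplied by gain factors
    and clamped to maximums. All constants are illustrative, not
    values from the disclosure.
    """
    leap_height = min(jump_height_m * height_gain, max_height_m)
    leap_distance = min(jump_distance_m * distance_gain, max_distance_m)
    return leap_height, leap_distance
```

In practice the gains might vary with game state (terrain, power-ups, and so forth) rather than being fixed.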

In certain embodiments, one or more real world figures (e.g., robotic elements, animatronic devices) may be part of the interactive video game system 10. For example, in certain embodiments, in addition or alternative to the virtual representation 14, the interactive video game system 10 may include a robotic representation, such as a robotic stag representation. Like the stag virtual representation 14 discussed above, the robotic stag is controlled by the primary controller 34 based on the detected movements of the player 12, and the controller 34 may augment (e.g., enhance, exaggerate) the detected movements of the player 12 when determining how to move the robotic stag representation. Additionally, in certain embodiments, the interactive video game system 10 may include other robotic elements, such as the illustrated robotic rabbit 150 and robotic squirrel 152. In certain embodiments, the movements of these additional robotic elements 150, 152 may be controlled based on the movements of other players in the play area 16. In other embodiments, these additional robotic elements 150, 152 may move in response to events occurring in the virtual environment 32, the movement of a robotic or virtual representation 14, or a combination thereof, to provide a more immersive experience that includes the movement of 3D, real world figures.

FIG. 11 illustrates another example of an animal-based virtual representation (e.g., an aquatic type animal) that enables augmented movements (e.g., swimming movements) relative to the detected movement of the players. In the illustrated example, a player 12 is moving through the play area 16 in an undulating fashion (e.g., walking, doing lunges). In response to detecting this movement, the primary controller 34 moves a dolphin virtual representation 14 in a corresponding undulating manner that is augmented (e.g., exaggerated, enhanced) relative to the movement of the player 12. Other player movements may be detected and translated into movements of the dolphin virtual representation 14. For example, the primary controller 34 may translate a detected jumping movement of the player 12 into a significant breach jump above a surface of a body of water in the virtual environment 32, or translate a detected swimming motion of the arms of the player 12 into a tail flicking motion of the dolphin virtual representation 14. For one or more of these movements, the primary controller 34 may augment (e.g., exaggerate, enhance) the movements of the dolphin virtual representation 14 relative to the actual detected movements of the player 12.

FIG. 12 illustrates another example of an animal-based virtual representation that enables augmented flying movement relative to the detected movement of the players. In the illustrated embodiment, a player 12 is positioned in the play area 16 such that the primary controller 34 detects and determines movements of the player 12 during gameplay of the interactive video game system 10. For the illustrated example, the player 12 controls a bat virtual representation 14, such that when the player poses or moves in a particular manner, these poses or movements are translated and augmented into movements of the wings of the bat.

For the example illustrated in FIG. 12, in response to detecting the player 12 with their arms extended, the primary controller 34 may cause the wings of the bat virtual representation to extend. Further, in response to detecting the player 12 leaning left or right, the primary controller 34 may cause the bat virtual representation 14 to lean and steer to the left or right in a corresponding manner. Additionally, in certain embodiments, the player 12 may flap their arms to cause the bat virtual representation 14 to flap its wings to gain altitude, and the player 12 may also tuck in their arms to cause the bat to dive. Furthermore, for the illustrated embodiment, the display device 24 includes a number of screens 154 (e.g., screens 154A, 154B, and 154C), which may enable the bat virtual representation 14 to fly from screen to screen around at least a portion of a perimeter of the play area 16. It may be appreciated that while examples of stag, dolphin, and bat virtual representations 14 are discussed with respect to FIGS. 10-12, the same technique may be applied to other animals having other abilities and/or forms of movement. For example, it is envisioned that this technique may be applied to enable a player to pose and move in unique ways to control an animal-based virtual representation (e.g., to jump like a kangaroo, to dig like a meerkat, to slither like a snake, and so forth). It may also be appreciated that this technique may also be applied to fictitious entities and animals, such that the disclosed system enables augmented movement of virtual representations 14 to, for example, gallop like a unicorn or fly like a dragon, based on the detected positions and movements of the players.
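The bat controls described above are a pose-to-command mapping: arm extension, sideways lean, flap rate, and tucked arms each drive one aspect of the wing motion. The field names and thresholds in this Python sketch are illustrative assumptions:

```python
def bat_command(arms_extended: bool, lean_deg: float,
                flap_rate_hz: float, arms_tucked: bool) -> dict:
    """Translate detected player pose features into bat wing commands.

    `lean_deg` is the player's sideways lean in degrees (negative =
    left). All names and thresholds are illustrative assumptions.
    """
    cmd = {"wings": "extended" if arms_extended else "folded",
           "steer": "level", "vertical": "glide"}
    # Leaning past a small dead zone steers the bat left or right.
    if lean_deg < -10:
        cmd["steer"] = "left"
    elif lean_deg > 10:
        cmd["steer"] = "right"
    # Tucked arms dive; sufficiently fast flapping gains altitude.
    if arms_tucked:
        cmd["vertical"] = "dive"
    elif flap_rate_hz > 0.5:
        cmd["vertical"] = "climb"
    return cmd
```

A renderer would then apply the resulting command dictionary to the bat representation each frame, exaggerating the wing motion relative to the player's actual arm motion.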

FIG. 13 illustrates an embodiment of the interactive video game system 10 that includes enhanced physical effects that are associated with the augmented abilities of a virtual representation 14. For the illustrated embodiment, a player 12 is positioned in the play area 16 near (e.g., below) two physical effects devices 156 and 158. In other embodiments, the physical effects devices 156 and 158 may be located above the play area 16, integrated within the floor of the play area 16, integrated within an interface panel 74 (as illustrated in FIG. 2), or otherwise situated to direct physical effects toward players in the play area 16. Physical effects device 156 is a thermal physical effects device that is designed to provide thermal effects (e.g., infrared (IR) light, blasts of cold/hot air, blasts of warm or cool mist) to the player 12 that correspond to events occurring within the virtual environment 32. In contrast, physical effects device 158 is an ultrasonic haptic device that is capable of using ultrasonic waves to provide a physical sensation of touch to the player 12 through the air, wherein the physical sensation corresponds to events occurring within the virtual environment 32.

More specifically, for the example illustrated in FIG. 13, the virtual representation 14 has received a fire-related power-up and is touching a barrier 160. That is, in the illustrated example, a first hand 162 of the virtual representation 14 is associated with the fire power-up, and a fire or sun symbol 164 is positioned near the first hand 162 of the virtual representation 14. Accordingly, the thermal physical effect device 156 may be an IR source (e.g., an IR lamp) that is activated by the primary controller 34 and directed toward a first hand 166 of the player 12. Additionally, a second hand 168 of the virtual representation 14 is illustrated as being in contact with the barrier 160 in the virtual environment 32. As such, the ultrasonic haptic physical effect device 158 is activated by the primary controller 34 and directed toward a second hand 170 of the player 12. Accordingly, the player 12 enjoys a more immersive experience by feeling physical effects that are based on events and situations occurring in the virtual environment 32. In this manner, the interactive video game system 10 may provide enhanced feedback to the player 12 that extends one or more aspects of augmented movements and abilities into the real-world experience of the player 12.
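The event-to-device dispatch described above can be sketched as a simple lookup from virtual-environment events to the devices that should fire. The event names follow the FIG. 13 example, and the device identifiers mirror the reference numerals (156 = thermal, 158 = ultrasonic haptic), but the mapping itself is an illustrative assumption:

```python
def select_effects(events: list) -> set:
    """Choose which physical effects devices to activate based on
    events occurring in the virtual environment. Event names and
    the mapping are illustrative assumptions."""
    devices = set()
    for event in events:
        # Fire-related virtual events drive the thermal device 156.
        if event in ("fire_power_up", "heat_zone"):
            devices.add("thermal_156")
        # Contact events drive the ultrasonic haptic device 158.
        if event in ("touch_barrier", "impact"):
            devices.add("haptic_158")
    return devices
```

A controller loop would call this per frame with the current event list and aim each activated device at the corresponding tracked hand of the player.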

The technical effects of the present approach include an interactive video game system that enables multiple players (e.g., two or more, four or more) to perform actions in a physical play area (e.g., a 2D or 3D play area) to control corresponding virtual representations in a virtual environment presented on a display device near the play area. The disclosed system includes a plurality of sensors and suitable processing circuitry configured to collect scanning data and generate various models, such as player models, shadow models, and/or skeletal models, for each player. The system generates the virtual representation of each player based, at least in part, on the generated player models. Additionally, the interactive video game system may set or modify properties, such as size, texture, and/or color, of the virtual representations based on various properties, such as points, purchases, or power-ups, associated with the players. Moreover, the interactive video game system enables augmented movements (e.g., super human abilities, animal-based movements) that are enhanced or exaggerated relative to the actual detected movements of the players in the play area. Further, embodiments of the interactive video game system may include robotic devices and/or physical effects devices that provide feedback relative to these augmented movements and abilities, to provide an immersive gameplay experience to the players.

While only certain features of the present technique have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the present technique. Additionally, the techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).

Claims

  1. An interactive amusement system, comprising: a plurality of sensing units, wherein each sensing unit of the plurality of sensing units is configured to collect respective scanning data of a person positioned in an interaction area and to generate a respective partial model of the person by respective processing circuitry of the sensing unit; a video display disposed near the interaction area and configured to present a virtual representation associated with the person; and a controller communicatively coupled to the plurality of sensing units and the video display, wherein the controller is configured to: receive the respective partial model of the person from each sensing unit of the plurality of sensing units; fuse each received partial model to generate a model of the person; identify an action of the person based on the model; generate the virtual representation of the person based on the model and the identified action; and present, on the video display, the generated virtual representation of the person in a virtual environment performing augmented motions that correlate with the identified action of the person and that are exaggerated in accordance with the model.
  2. The interactive amusement system of claim 1, wherein the generated virtual representation of the person is a silhouette of the person.
  3. The interactive amusement system of claim 1, wherein the model includes a shadow model and the controller is configured to generate the virtual representation based on the shadow model.
  4. The interactive amusement system of claim 1, wherein the model includes a skeletal model of the person, and wherein the controller is configured to identify the action of the person based on the skeletal model of the person.
  5. The interactive amusement system of claim 1, wherein the controller is configured to generate a virtual object and present the virtual object on the video display based on the identified action of the person.
  6. The interactive amusement system of claim 5, wherein the identified action of the person is a throwing motion and the virtual object is a thrown object.
  7. The interactive amusement system of claim 6, wherein the augmented motions correspond to the throwing motion and exaggerated virtual action of the thrown object.
  8. The interactive amusement system of claim 1, wherein the generated virtual representation depicts a super human ability that is triggered by the identified action of the person.
  9. The interactive amusement system of claim 1, comprising a movable object model configured to generate a virtual moving object.
  10. The interactive amusement system of claim 9, wherein the controller is configured to present the virtual moving object as moving on the video display in coordination with the generated virtual representation of the person based on the movable object model and the model.
  11. A method of operating an interactive amusement system, the method comprising: receiving, via processing circuitry of a controller, a respective partial model of a person from each of a plurality of sensing units, wherein each sensing unit of the plurality of sensing units is configured to collect respective scanning data of the person and generate the respective partial model of the person by respective processing circuitry of the sensing unit; fusing, via the processing circuitry of the controller, each received partial model to generate a model of the person; identifying, via the processing circuitry of the controller, an action of the person based on the model; generating, via the processing circuitry of the controller, a virtual representation of the person based on the model and the action; and presenting, via a video display that is viewable by the person, the generated virtual representation, in a virtual environment, performing a virtual action that corresponds to and is augmented relative to the action of the person.
  12. The method of claim 11, comprising identifying a physical item held or worn by the person.
  13. The method of claim 12, wherein the virtual action is based on a trait of the physical item.
  14. The method of claim 11, comprising generating the virtual representation of the person as a silhouette.
  15. The method of claim 11, comprising generating, via the processing circuitry of the controller, a virtual moving object in coordination with the virtual representation of the person.
  16. The method of claim 15, wherein the virtual moving object is presented based on the virtual action.
  17. An interactive amusement system, comprising a controller configured to: receive a respective partial model of a person from each sensing unit of a plurality of sensing units, wherein each sensing unit of the plurality of sensing units is configured to collect respective scanning data of the person and generate the respective partial model of the person by respective processing circuitry of the sensing unit; fuse each received partial model to generate a model of the person; identify an action of the person based on the model of the person; generate a virtual representation of the person based on the model and the identified action; activate an augmented activity of the virtual representation based on the identified action, wherein the augmented activity includes a virtual exaggeration of the identified action; and present, on a video display, the virtual representation in a virtual environment performing the augmented activity.
  18. The interactive amusement system of claim 17, wherein the controller is configured to generate a virtual moving object based on the identified action, the augmented activity, or both.
  19. The interactive amusement system of claim 17, wherein the controller is configured to generate the virtual representation as a silhouette of the person.
  20. The interactive amusement system of claim 17, wherein the controller is configured to generate a virtual moving object and present the virtual moving object on the video display based on the identified action of the person corresponding to a throwing or kicking motion.
