U.S. Pat. No. 11,400,377

TECHNIQUES FOR VIDEO GAME INPUT COMPENSATION AND RELATED SYSTEMS AND METHODS

Assignee: Harmonix Music Systems, Inc.

Issue Date: August 25, 2020

Illustrative Figure

Abstract

Described herein are techniques for improving user experience in a video game. In some embodiments, the techniques utilize one or more snapshots of the video game over time to adjust a pose of a user's input device in the video game. For example, the user's input device may be a spatially tracked controller which has a tracked pose in the video game. A user input may indicate a button pressed on the spatially tracked controller (e.g., triggering firing of a shot in the video game). In some embodiments, the techniques use the snapshot(s) of the video game that capture instances prior to the user input to adjust a pose of the user input device in the video game to provide an improved user experience.

Description


DETAILED DESCRIPTION

The inventor has recognized and appreciated that various aspects of video game play, including aspects related to the video game platform and/or the player's interaction with the platform through input devices, can result in a poor game experience. How the computing device processes user input, potentially in combination with the user's physical interaction with an input device, may result in a poor game experience. For example, a user may intend to provide an input at a particular time during game play and/or in a particular manner with the input device, yet the video game platform processes the user input differently than the user intends. For example, when aiming a weapon in a video game, a common practice is for the user to line up their shot and to pull the trigger at the appropriate time according to game play. While video game platforms are designed to require user skill in order to obtain a successful or unsuccessful shot, the inventor has discovered and appreciated that unintended aspects of video game play may undesirably influence whether the shot is successful. For example, due to how the video game console processes user input, the video game scenario may change in the time between when the user lines up the shot and pulls the trigger. As another example, since spatially tracked controllers can require the user to press a button or to pull a trigger, the act of pulling the trigger can affect the user's aim, such as by unintentionally jerking the controller's pose (which can unintentionally change the user's direction of aim), causing the user to miss the shot.

Therefore, while it can be desirable to require a certain level of skill to play video games, the inventor has discovered and appreciated that such timing issues and/or input device-based issues can confuse players and feel unfair. Similarly, while spatially tracked input devices can improve the user experience by providing an additional level of enjoyment compared to non-spatially tracked controllers, due to the fact that such controllers are often lightweight, pressing buttons can undesirably change the pose of the controller and affect the user's game play. The inventor has also discovered and appreciated that it can be advantageous to augment a user's video game playing abilities, such as improving the player's performance of the game. A user's abilities can be augmented in order to increase a player's enjoyment of the video game. For example, a player's aim can be improved so that the player successfully shoots a target more frequently, which can result in a better user experience.

The inventor has developed improvements to existing video game technology that can enhance the user's gaming experience. The techniques can be used to provide realistic user experiences, in a manner that lines up with the user's perception of their video game play. For example, if a user perceives that their input should be successful (e.g., a successful shot, a successful maneuver, and/or the like), the techniques can be used to provide such a result to increase their enjoyment of the game. In some embodiments, the techniques adjust the user's input and/or other aspects associated with the input (e.g., aim), so that the user's input achieves a result more in-line with the user's perception of their game play. Such techniques can therefore give players the benefit of the doubt of their input, such as by increasing the chances that the user shoots at an intended location.

In some embodiments, rather than processing a user input according to the current time and/or input device characteristics (e.g., button presses, pose of a tracked controller, aim, etc.), the techniques can adjust the input based on snapshots of prior game play aspects. As an illustrative example, for a shooting-type feature, instead of simply firing at where the user is currently aiming at the time of the trigger pull, the techniques can track snapshots of aspects of video game play over time, such as the history of the user's aim at a target. The techniques can adjust the aim and/or timing of the trigger pull when processing the shot. Such approaches can help the player to shoot at a location that the user was aiming at immediately before pulling the trigger, since pulling the trigger may cause the location aimed at by the user to unintentionally change. In some embodiments, the current video game aspects associated with the input are also analyzed, in the event that the user's input at the current time (and not a historical time) results in the best game play.

In some embodiments, the techniques can adjust aspects of a user input based on a single historical snapshot. For example, the snapshots can be a snapshot that occurred a fixed time in the past (e.g., 15 milliseconds ago, 30 milliseconds ago, etc.) or a predetermined number of snapshots in the past. In some embodiments, the user input can be adjusted to use one or more aspects of the historical snapshot instead of the actual data associated with the user input. For example, to adjust for timing delays associated with a user input, the video game platform can use previous data from the snapshot, such as a previous aiming location, a previous pose of the input device (e.g., for tracked controllers), and/or the like.
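The fixed-offset, single-snapshot variant described above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation: the snapshot layout (a timestamp paired with an aim position) and the `snapshot_at_offset` name are assumptions.

```python
def snapshot_at_offset(snapshots, input_time_ms, offset_ms):
    """Return the (timestamp, aim) snapshot closest to a fixed time in
    the past, e.g. 15 or 30 milliseconds before the input arrived."""
    target_time = input_time_ms - offset_ms
    # Pick the recorded snapshot whose timestamp is nearest the target time.
    return min(snapshots, key=lambda snap: abs(snap[0] - target_time))

# Trigger pulled at t=45 ms; the pull jerked the aim in the last snapshot,
# so fire using the pose captured roughly 15 ms earlier instead.
history = [(0, (10, 10)), (15, (12, 11)), (30, (13, 9)), (45, (20, 25))]
_, adjusted_aim = snapshot_at_offset(history, input_time_ms=45, offset_ms=15)
```

The "predetermined number of snapshots in the past" variant would simply index the history (e.g., `history[-3]`) instead of searching by timestamp.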

In some embodiments, the techniques can adjust aspects of a user input based on a plurality of historical snapshots of game play. For example, the techniques can analyze the set of historical snapshots and adjust aspects of the user input based on one or a combination of the historical snapshots. In some embodiments, the techniques can determine a metric for each of the historical snapshots that is indicative of a success or failure of the user's input. For example, the metric can indicate how close a shot would land to a target, how well the shot would score based on a heat map of the target, and/or the like. The video game platform can analyze the metrics of the historical snapshots and select one or more snapshots based on the metrics. For example, the techniques can select the historical snapshot with the best metric, select the two or more historical snapshots with the best metrics, and/or the like. The techniques can then adjust aspects of the game play based on the selected snapshot(s). In some embodiments, the video game platform can use aspects of a snapshot with the best metric instead of the aspects associated with the input. For example, the video game platform can use the aim of the snapshot with the best metric to fire from, instead of the aim associated with the button press or trigger pull. As another example, the techniques can adjust the aim associated with the input based on the aim of the snapshot with the best metric.
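One plausible reading of this selection step, sketched with a 2D distance-based metric where lower is better. The dictionary layout and the helper names (`aim_metric`, `select_best_aim`) are assumptions for illustration, not terms from the patent.

```python
import math

def aim_metric(snapshot):
    """Distance between the snapshot's aim and its target; lower is better."""
    (ax, ay), (tx, ty) = snapshot["aim"], snapshot["target"]
    return math.hypot(ax - tx, ay - ty)

def select_best_aim(snapshots):
    """Pick the historical snapshot with the best (smallest) metric and use
    its aim in place of the aim at the moment of the trigger pull."""
    best = min(snapshots, key=aim_metric)
    return best["aim"]

shots = [
    {"aim": (0, 0), "target": (3, 4)},   # metric 5.0
    {"aim": (2, 4), "target": (3, 4)},   # metric 1.0  <- best
    {"aim": (9, 9), "target": (3, 4)},   # metric ~7.8
]
best_aim = select_best_aim(shots)
```

The alternative mentioned above, adjusting rather than replacing the input's aim, could blend `best_aim` with the current aim instead of returning it directly.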

Following below are more detailed descriptions of various concepts related to, and embodiments of, techniques for player aiming assist. It should be appreciated that various aspects described herein may be implemented in any of numerous ways. Examples of specific implementations are provided herein for illustrative purposes only. In addition, the various aspects described in the embodiments below may be used alone or in any combination, and are not limited to the combinations explicitly described herein.

FIG. 1 is a block diagram of a platform 100 in accordance with some embodiments. The platform 100 can include a computing device 102. In some embodiments, the computing device 102 can be a dedicated game console, e.g., PLAYSTATION® 3, PLAYSTATION® 4, or PLAYSTATION® VITA manufactured by Sony Computer Entertainment, Inc.; WII™, WII U™, NINTENDO 2DS™, NINTENDO 3DS™, or NINTENDO SWITCH™ manufactured by Nintendo Co., Ltd.; or XBOX®, XBOX 360®, or XBOX ONE® manufactured by Microsoft Corp. In some embodiments, the computing device 102 can be a computer configured to run a virtual reality (VR) platform, such as those provided by Oculus, HTC, Sony, and/or the like, and discussed further herein. In other embodiments, the computing device 102 can be a general purpose desktop or laptop computer. In other embodiments, the computing device 102 can be a server connected to a computer network. In other embodiments, the computing device 102 can be user equipment. The user equipment can communicate with one or more radio access networks and/or with wired communication networks. The user equipment can be a cellular phone. The user equipment can also be a smartphone providing services such as word processing, web browsing, gaming, and/or the like. The user equipment can also be a tablet computer providing network access and most of the services provided by a smartphone. The user equipment can operate using an operating system such as Symbian OS, iPhone OS, RIM's Blackberry, Windows Mobile, Linux, HP WebOS, or Android. The screen might be a touch screen that is used to input data to the mobile device, in which case the screen can be used instead of the full keyboard. The user equipment can also keep global positioning coordinates, spatial positioning information (e.g., roll, pitch, yaw, etc.), profile information, or other location information.

The computing device 102 can include a memory device 104, a processor 106, a video rendering module 110, and a device interface 108. While connections between the components of the computing device 102 are not shown in FIG. 1 for ease of illustration, it should be appreciated that the components can be interconnected in various ways to facilitate communication among the components.

The non-transitory memory device 104 can maintain machine-readable instructions for execution on the processor 106. In some embodiments, the memory 104 can take the form of volatile memory, such as Random Access Memory (RAM) or cache memory. In other embodiments, the memory 104 can take the form of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; or magnetic disks, e.g., internal hard disks or removable disks. In some embodiments, the memory 104 can include portable data storage devices, including, for example, magneto-optical disks, and CD-ROM and DVD-ROM disks.

The processor 106 can take the form of a programmable microprocessor executing machine-readable instructions, such as a central processing unit (CPU). Alternatively, the processor 106 can be implemented at least in part by special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an ASIC (application-specific integrated circuit), or other specialized circuit. The processor 106 can include a plurality of processing units, each of which may independently operate on input data, such as a gradient vector. In some cases, the plurality of processing units may be configured to perform an identical operation on different data. For example, the plurality of processing units can be configured in a single-instruction-multiple-data (SIMD) architecture to operate on multiple data using a single instruction. In other cases, the plurality of processing units may be configured to perform different operations on different data. For example, the plurality of processing units can be configured in a multiple-instruction-multiple-data (MIMD) architecture to operate on multiple data using multiple instructions.

The processor 106 can be coupled with a device interface 108. The device interface 108 can be implemented in hardware to send and receive signals in a variety of mediums, such as optical, copper, and wireless, and in a number of different protocols, some of which may be non-transient.

The device interface 108 can be coupled with an external input device 112. The external input device 112 can allow a player to interact with the computing device 102. In some embodiments, the external input device 112 can include a game console controller, a mouse, a keyboard, or any other device that can provide communication with the computing device 102. In some embodiments, the external input device 112 can be one or more spatially tracked controllers that are configured to work with a VR headset, such as the Oculus Rift, HTC Vive, Sony PlayStation VR, and/or the like. Examples of such spatially tracked controllers include motion controllers, wired gloves, 3D mice, and/or the like. For example, the spatially tracked controllers can be tracked using optical tracking systems, such as infrared cameras and/or the like.

In some embodiments, the processor 106 can be coupled to a video rendering module 110. The video rendering module 110 can be configured to generate a video display on the external audio/visual device 114 based on instructions from the processor 106. While not shown, the computing device 102 can also include a sound synthesizer that can be configured to generate sounds accompanying the video display.

The external audio/visual device 114 can be a video device, an audio device, or an audio/video device, and can include one or more audio and/or video devices. In some embodiments, the one or more audio/video devices can include a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or LED (light emitting diode) monitor, a television, an integrated display, e.g., the display of a PLAYSTATION® VITA or Nintendo 3DS, or other type of device capable of displaying video and accompanying audio sounds. In some embodiments, the external audio/visual device 114 is a VR headset, such as the Oculus Rift, HTC Vive, Sony PlayStation VR, and/or other VR headsets. Such VR headsets can include motion sensing devices, such as gyroscopes and/or other motion sensors that track the user's motion (e.g., the user's head, hand, or body). Such VR headsets can also include display screens. Such VR headsets can also include on-board processors that are used to process motion data, display VR video, and perform other aspects of the VR environment.

While FIG. 1 shows one connection to the one or more audio/video devices 114, in other embodiments two or more connections are also possible, such as a connection to a video device and a separate connection to an audio device (e.g., speakers or a headset). In some embodiments, one of the audio/video devices 114 can reside in a first system (e.g., a display system) and another one of the audio/video devices 114 can reside in a second system (e.g., a sound system).

In some embodiments, one or more of the modules 108, 110, and/or other modules not shown in FIG. 1, can be implemented in software using the memory device 104. The software can run on a processor 106 capable of executing computer instructions or computer code. The processor 106 can also be implemented in hardware using an application-specific integrated circuit (ASIC), programmable logic array (PLA), digital signal processor (DSP), field programmable gate array (FPGA), or any other integrated circuit. Processors 106 suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, digital signal processors, and any one or more processors of any kind of digital computer. Generally, the processor 106 receives instructions and data from a read-only memory or a random access memory or both.

In some embodiments, one or more of the modules (e.g., modules 108, 110, and/or other modules) can be implemented in hardware using an ASIC (application-specific integrated circuit), PLA (programmable logic array), DSP (digital signal processor), FPGA (field programmable gate array), or other integrated circuit. In some embodiments, two or more modules can be implemented on the same integrated circuit, such as an ASIC, PLA, DSP, or FPGA, thereby forming a system on a chip. Subroutines can refer to portions of the computer program and/or the processor/special circuitry that implement one or more functions.

The various modules of the computing device102can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The implementation can be as a computer program product, e.g., a computer program tangibly embodied in a machine-readable storage device, for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, and/or multiple computers. A computer program can be written in any form of computer or programming language, including source code, compiled code, interpreted code and/or machine code, and the computer program can be deployed in any form, including as a stand-alone program or as a subroutine, element, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one or more sites.

While the modules 108 and 110 are depicted as separate modules outside of the processor 106 (e.g., as stand-alone graphics cards or sound cards), other embodiments are also possible. For example, one or both modules can be implemented as specialized hardware blocks within the processor 106. Alternatively, one or more of the modules 108 and 110 can be implemented purely as software running within the processor 106.

Generally, the techniques provide for adjusting aspects of video game play based on historical game play data. In some embodiments, the techniques adjust aspects associated with a user input, such as the timing of a user input, an aim and/or location associated with the user's input, and/or the like, based on historical snapshots of those aspects over time. Some examples discussed herein provide for adjusting the pose associated with the input (e.g., a 2D position, a 3D position, a ray or vector that includes both a position and a direction, and/or the like) in order to adjust the user's aim for a shot. However, this is for exemplary purposes only, as the techniques can be used to adjust any aspect of game play that can be tracked over time, such as the timing of the input, the position of the characters, obstacles, terrain, and/or the like.

FIG. 2 is a flow chart showing an exemplary computerized method 200 for determining an adjusted pose of an input device (e.g., aim) for an associated input (e.g., a button press or trigger pull), according to some embodiments. As described herein, the method 200 can be executed by a video game platform, such as by the video game platform 100 described in conjunction with FIG. 1. At step 202, the video game platform can access data indicative of a set of snapshots of one or more aspects of a video game over time. Each snapshot can be associated with a timestamp that is indicative of when the aspects of the game play occurred. Each snapshot can include data for various game play aspects, including data indicative of a first pose associated with an input device, and a state of a video game simulation at a time of the snapshot. In some embodiments, the state of the video game simulation may be a second pose associated with a target. For example, the second pose can be the location of a target that the user is aiming at using the first pose of the input device. In some embodiments, the state of the video game simulation may be poses of multiple targets. For example, the poses can be locations of multiple targets that a user is aiming at using the first pose of the input device. In some embodiments, the state of the video game simulation may be a parameter indicating characteristics of an environment in the simulation. For example, the state of the video game simulation may be and/or include geometrical aspects (e.g., dimension(s), angle(s), etc.) of a terrain. In another example, the state of the video game simulation may be and/or include wind speed and/or turbulence in the environment of the simulation at the time of the snapshot. The state of the video game simulation is not limited to the examples described herein.
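The snapshot record accessed in step 202 could be modeled as follows. The field names and the dictionary-based simulation state are assumptions for illustration; the patent leaves the concrete layout open.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass(frozen=True)
class Snapshot:
    """One recorded instant of game play: when it occurred, the pose of the
    input device at that time, and the simulation state at that time."""
    timestamp_ms: int
    input_pose: Any                  # 2D/3D position, or a (position, direction) ray
    simulation_state: dict = field(default_factory=dict)  # e.g., target pose(s), terrain, wind

# Example: a ray-style pose (position plus aiming direction) and a state
# holding a target position and a wind speed.
snap = Snapshot(
    timestamp_ms=1200,
    input_pose=((0.0, 1.5, 0.0), (0.0, 0.0, -1.0)),
    simulation_state={"target_pose": (0.2, 1.4, -5.0), "wind_speed": 3.5},
)
```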

FIGS. 3-4 are graphical depictions showing aspects of video game play that can be tracked by the snapshots. FIG. 3 is a graphical depiction of an exemplary snapshot 300 of 3D video game aspects, according to some embodiments. The snapshot 300 includes a first pose 302, which is a ray that includes a 3D position 304 of the input device (e.g., a position of a tracked VR controller) as well as a direction 306 that the controller is pointing (e.g., represented by a second 3D coordinate). The snapshot 300 also includes an indication of a state of a video game simulation at a time of the snapshot (e.g., a second pose) 308. For example, the indication of the video game simulation may be a 3D position of the target. While the indication of the pose 308 in FIG. 3 is shown as a 3D point, this is for exemplary purposes, as the target may include a 2D and/or 3D component and/or other indication. Examples of video game simulation states that may be indicated in a snapshot are described herein.

FIG. 4 is a graphical depiction of an exemplary snapshot 400 of 2D video game aspects, according to some embodiments. The snapshot 400 includes a first pose 402, which is a 2D position associated with the input device, such as a position in the video game indicating the user's aim at the time of the snapshot. The snapshot 400 also includes an indication of a state of a video game simulation (e.g., a second pose) 404, which is a 2D position associated with the target. As noted in conjunction with FIG. 3, while the pose 404 of the target is shown as a 2D position, the target can comprise a 2D shape (e.g., which is related to the 2D position).

In some embodiments, the video game platform can be configured to store and/or access one or a plurality of snapshots. For example, the video game platform can be configured to store and/or access just a single snapshot, such as the snapshot that is associated with a timestamp within a predetermined time period before the time of the input, such as a snapshot that is 50 milliseconds, 100 milliseconds, etc. prior to the time of the input. As another example, the video game platform can be configured to store and/or access a single snapshot that is the snapshot that is a predetermined number of snapshots before the time of the input, such as five snapshots before the input, ten snapshots before the input, etc. In some embodiments, the video game platform can be configured to store and/or access a plurality of snapshots. For example, the video game console can be configured to store some or all of the snapshots that occur within a predetermined time period before the time of the input (e.g., 50 milliseconds, 100 milliseconds, etc.). As another example, the video game console can be configured to store some or all of the snapshots that are within a predetermined number of snapshots before the time of the input (e.g., 5 snapshots, 10 snapshots, etc.).
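The time-windowed storage described above can be sketched as a small ring-buffer-style helper. This is a hypothetical structure, not the patent's data layout; the class and method names are assumptions.

```python
from collections import deque

class SnapshotHistory:
    """Keep only snapshots recorded within a time window before the most
    recent one (e.g., the 50 ms / 100 ms windows mentioned above)."""

    def __init__(self, window_ms):
        self.window_ms = window_ms
        self._buffer = deque()

    def record(self, timestamp_ms, snapshot):
        self._buffer.append((timestamp_ms, snapshot))
        # Evict snapshots that have aged out of the window.
        while timestamp_ms - self._buffer[0][0] > self.window_ms:
            self._buffer.popleft()

    def recent(self):
        """All retained (timestamp, snapshot) pairs, oldest first."""
        return list(self._buffer)

history = SnapshotHistory(window_ms=100)
for t in (0, 30, 60, 120):
    history.record(t, {"aim": t})
```

The count-based variant (keep the last 5 or 10 snapshots) is even simpler: `deque(maxlen=n)` evicts the oldest entry automatically.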

At step 204, the video game platform receives data indicative of an input (e.g., a button press, trigger pull, etc.) from the input device at a time that occurs after the timestamps associated with at least a portion of the set of snapshots. The video game platform also receives data indicative of an initial pose associated with the input device for the input. Like the data of game play aspects in the snapshots, the pose of the input device can be an associated 2D position in the video game and/or 3D position in the video game (e.g., representative of a position on the screen, as discussed in conjunction with FIG. 4). In some embodiments, the pose of the input device can be a ray that indicates a position of the input device in the video game space, as well as a direction that the input device is pointing in the video game space (e.g., as discussed in conjunction with FIG. 3). For example, the video game platform can determine such rays for tracked VR controllers.

At step 206, the computing device determines an adjusted pose for the input based on a relationship for each snapshot that is determined based on the first pose associated with the input device and the state of the video game simulation (e.g., a second pose of the target). In some embodiments, as described herein, the set of snapshots may include only one snapshot. The video game platform can determine an adjusted pose for the input by instead using the pose associated with the input device in that snapshot and/or by adjusting the initial pose based on the pose in that snapshot.

In some embodiments, as also described herein, the set of snapshots may include a plurality of snapshots. The techniques can include analyzing the plurality of snapshots to determine which snapshot(s) to use to determine the adjusted pose. In some embodiments, the video game platform can determine, for each snapshot, a metric based on the video game aspects in the snapshot. In some embodiments, the video game platform can determine the metric based on (1) the first pose associated with the input device and (2) a state of a video game simulation at a time of the snapshot. For example, the metric can be determined based on a pose associated with the input device and a pose associated with the target in the snapshot. In another example, the metric can be determined based on the pose associated with the input device and poses of multiple targets. In another example, the metric can be determined based on the pose associated with the input device and the geometry of a terrain in the video game simulation. In another example, the metric can be determined based on the pose associated with the input device and wind conditions (e.g., speed and turbulence) in the video game simulation. In some embodiments, a combination of multiple aspects of the state of the video game simulation as described herein can be used to determine the metric.

FIG. 5 is a flowchart of an exemplary computerized method 500 for determining a metric for a set of snapshots, according to some embodiments. At step 502, the video game platform accesses a set of a plurality of snapshots. At step 504, the video game platform selects one of the snapshots from the set. At step 506, the video game platform determines a metric for the selected snapshot based on the first pose associated with the controller and a state of a video game simulation (e.g., a second pose associated with the target) at the time of the snapshot. At step 508, the video game platform determines whether there are more snapshots left. If yes, the method proceeds back to step 504; otherwise, the method proceeds to step 510 and selects one or more snapshots based on the metrics. At step 512, the video game platform determines the adjusted pose based on the selected snapshot(s).
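In code, the score-and-select loop of method 500 collapses to a minimum over the snapshot set. This is a minimal sketch under two assumptions not fixed by the patent: lower metric values are better, and step 512 takes the adjusted pose directly from the winning snapshot (it could instead blend poses).

```python
def determine_adjusted_pose(snapshots, metric):
    """Steps 502-512 as a loop: score each snapshot with `metric`
    (step 506), select the best-scoring one (step 510), and derive the
    adjusted pose from it (step 512). Lower scores are assumed better."""
    best = min(snapshots, key=metric)
    return best["input_pose"]

snapshots = [
    {"input_pose": (1, 1), "miss_distance": 4.0},
    {"input_pose": (2, 2), "miss_distance": 0.5},  # best metric
    {"input_pose": (3, 3), "miss_distance": 2.0},
]
pose = determine_adjusted_pose(snapshots, metric=lambda s: s["miss_distance"])
```

Passing the metric in as a callable mirrors the description: the same selection loop works whether the metric is a miss distance, a heat-map score, or a combination of simulation-state factors.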

Referring to step 506, the metric can reflect a fitness of different aspects of the snapshot, such as a fitness indicating how well the user is aiming at a target. Continuing with the example discussed in FIG. 2, the metric can reflect, for each snapshot, an accuracy of the first pose associated with the input device relative to the state of the video game simulation (e.g., the second pose of the target). FIGS. 6A-6C are diagrams illustrating exemplary metrics for snapshots of 3D game aspects, according to some embodiments. FIG. 6A graphically illustrates a first metric 602 for a first snapshot 600, according to some embodiments. The first pose 604 associated with the input device comprises a ray comprising a position and a direction. For illustrative purposes, the ray is extended using dotted arrow 606 to pictorially illustrate the user's aim in the 3D data. The second pose 608 is associated with the target and includes a 3D position, as shown. The metric 602 indicates a fitness of how well the ray 604 associated with the input device is pointing at the pose 608 of the target. As shown in FIG. 6A, the metric 602 is indicated with a line between the dotted line 606 (showing the user's aim in the 3D space) and the pose 608. The metric can be indicative of, for example, a distance between the user's aim and the target 608.
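The metric illustrated in FIGS. 6A-6C can be read as the distance from the target position to the ray the controller points along. Below is a pure-Python sketch under that assumption; a real engine would use its own vector math library, and the function name is illustrative.

```python
import math

def ray_to_point_distance(origin, direction, point):
    """Distance from `point` (the target pose) to the aiming ray given by
    `origin` and `direction` (the input device pose). Smaller = better aim."""
    mag = math.sqrt(sum(c * c for c in direction))
    unit = [c / mag for c in direction]
    rel = [p - o for p, o in zip(point, origin)]
    # Project the target onto the ray; clamp so only the forward half counts.
    t = max(0.0, sum(r * u for r, u in zip(rel, unit)))
    closest = [o + t * u for o, u in zip(origin, unit)]
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(point, closest)))

# Controller at the origin aiming along +x; target 3 units off the aim line.
d = ray_to_point_distance((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (5.0, 3.0, 0.0))
```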

FIG. 6B graphically illustrates a second metric 632 for a second snapshot 630, according to some embodiments. Similar to the first snapshot 600, the snapshot 630 includes a first pose 634 associated with the input device, which is extended using dotted arrow 636 to pictorially illustrate the user's aim in the 3D data. The second snapshot 630 also includes a second pose 638 that is associated with the target. Compared to the first snapshot 600, the metric 632 is larger than the metric 602, which indicates that the second snapshot 630 has a worse metric than the first snapshot 600.

FIG. 6C graphically illustrates a third metric 662 for a third snapshot 660, according to some embodiments. Like the first and second snapshots 600, 630, the third snapshot 660 includes a first pose 664 associated with the input device, which is extended using dotted arrow 666, and a second pose 668 that is associated with the target. Compared to the first snapshot 600 and the second snapshot 630, the metric 662 is smaller than both metrics 602 and 632, which indicates that the third snapshot 660 has the best metric of the three snapshots.

FIGS. 7A-7B graphically illustrate a metric computed for 3D data using a heat map, according to some embodiments. For example, in shooting games, the heat map can be used to determine the effectiveness of a shot for multiple shots that hit the target at different locations. For ease of illustration, the heat map is shown as a bullseye-type structure, where the center-most portion achieves the highest score, the middle portion achieves a middle score, and the outermost portion achieves the lowest score. A person of skill can appreciate that various shapes and/or configurations of heat maps can be used with different structures without departing from the spirit of the invention.
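A bullseye-style heat map of this kind can be scored by ring. The radii and scores below are made-up values for illustration, not taken from the patent.

```python
import math

# (outer radius, score) from innermost to outermost ring; illustrative values.
BULLSEYE_RINGS = [(1.0, 100), (2.0, 50), (3.0, 10)]

def heat_map_score(hit, center):
    """Score a 2D hit position against a bullseye-style heat map: the
    center ring scores highest, the outer ring lowest, misses score 0."""
    r = math.dist(hit, center)
    for outer_radius, score in BULLSEYE_RINGS:
        if r <= outer_radius:
            return score
    return 0
```

Used as the step-506 metric, higher scores would be better, so the selection step would take a maximum rather than a minimum.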

FIG. 7A graphically illustrates a first metric 702 for a first snapshot 700, according to some embodiments. The first pose 704 associated with the input device comprises a ray, which is extended using dotted arrow 706 to pictorially illustrate the user's aim in the 3D data. The second pose 708 is associated with the target and includes a heat map. The metric 702 indicates a fitness of how well the ray 704 associated with the input device is pointing at the target. In particular, as shown in FIG. 7A, the user's aim as indicated by the pose 704 would result in a shot in the outermost ring of the heat map 708. The metric can be indicative of, for example, a successful shot with the lowest score, since shots within the middle and/or center portions of the heat map will score higher than shots in the outermost portion of the heat map.

FIG. 7B graphically illustrates a second metric 752 for a second snapshot 750, according to some embodiments. Like the snapshot 700, the second snapshot 750 includes a first pose 754 associated with the input device, which is extended using dotted arrow 756, and a second pose 758 that is associated with the target heat map. Compared to the first snapshot 700, the metric 752 will score higher than the metric 702, since the metric 752 indicates the shot is within the center portion of the heat map. It should be appreciated that while the metrics 702, 752 are illustrated as points in the heat map, the metric can be stored as a number representing the score of the shot based on the heat map, and/or can be represented in other manners without departing from the spirit of the techniques described herein.
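A bullseye-style heat map like the one in FIGS. 7A-7B can be scored by testing which concentric ring a shot lands in. This is a minimal sketch assuming circular rings with fixed radii and scores; the function name and the particular values are illustrative only.

```python
import math

def heat_map_score(hit_point, center, ring_radii, ring_scores, miss_score=0):
    """Score a shot against a bullseye heat map centered at `center`.

    `ring_radii` lists increasing outer radii for each ring, and
    `ring_scores` the score awarded for landing inside that ring
    (innermost ring first, scoring highest). Shots outside every
    ring receive `miss_score`."""
    dist = math.dist(hit_point, center)
    for radius, score in zip(ring_radii, ring_scores):
        if dist <= radius:
            return score
    return miss_score
```

With radii [1, 2, 3] and scores [100, 50, 10], a shot near the center scores 100 while one in the outermost ring scores 10, mirroring the center/middle/outermost scoring of FIGS. 7A-7B.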

While FIGS. 6A-6C and 7A-7B illustrate snapshots for 3D data (e.g., for VR applications), it should be appreciated that the techniques are not limited to 3D data; the data can be processed so that the metric is computed in a non-3D manner. FIGS. 8A-8B graphically illustrate a first example of metrics in a 2D space, according to some embodiments. FIG. 8A graphically illustrates a first metric 802 for a first snapshot 800, according to some embodiments. The first pose 804 associated with the input device and the second pose 806 associated with the target are both 2D positions. The metric 802 indicates an accuracy of the pose 804 associated with the input device pointing at the target indicated by the pose 806. In particular, as shown in FIG. 8A, the user's aim as indicated by the pose 804 would result in a shot at a distance from the second pose 806 shown by the line of the metric 802. The metric can be indicative of, for example, a distance between the first pose 804 and the second pose 806.

FIG. 8B graphically illustrates a second metric 852 for a second snapshot 850, according to some embodiments. Like the first snapshot 800, the first pose 854 associated with the input device and the second pose 856 associated with the target are both 2D positions. The metric 852 indicates a fitness of how well the pose 854 associated with the input device is pointing at the target indicated by the pose 856. In particular, the distance shown by the metric 852 is shorter than that shown by the metric 802 in FIG. 8A, and therefore the metric 852 is indicative of a better shot than the metric 802.

FIGS. 9A-9B graphically illustrate another example of metrics in a 2D space using a heat map, according to some embodiments. FIG. 9A graphically illustrates a first snapshot 900, according to some embodiments. The first pose 902 associated with the input device is a 2D position, and the second pose 904 associated with the target is a heat map. As shown in FIG. 9A, the user's aim as indicated by the pose 902 on the heat map 904 would result in a shot with a medium score, since the shot is at the middle ring of the heat map. While the metric is not shown in FIG. 9A, the metric can be a number and/or other value indicative of the medium score of the shot. FIG. 9B graphically illustrates a second snapshot 950, according to some embodiments. Like FIG. 9A, the first pose 952 associated with the input device is a 2D position, and the second pose 954 associated with the target is a heat map. As shown in FIG. 9B, the user's aim as indicated by the pose 952 on the heat map 954 would result in a shot with a low score, since the shot is at the outermost ring of the heat map. While the metric is not shown in FIG. 9B, the metric can be a number and/or other value indicative of the low score of the shot. Comparing the metrics of the snapshots 900 and 950, the snapshot 900 has the better metric.

Referring to steps 510 and 512, in some embodiments the video game console can compare the metrics of the plurality of snapshots to determine which snapshot has the highest metric, and determine the adjusted pose based on the determined snapshot. For example, the video game console can determine that the snapshot 660 in FIG. 6C has the best metric compared to the snapshots 600 and 630 in FIGS. 6A-6B, respectively, and determine the adjusted pose based on the snapshot 660. In some embodiments, the adjusted pose can be set to the pose of the input device in the determined snapshot. In some embodiments, the adjusted pose can be determined based on both the initial pose associated with the input and the first pose of the determined snapshot. For example, the initial pose associated with the input can be adjusted partially based on the first pose of the determined snapshot (e.g., improved by a certain percentage), rather than simply setting the adjusted pose to be the first pose.
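The selection-and-adjustment logic of steps 510 and 512 can be sketched as below, assuming a lower metric value (e.g., aim-to-target distance) indicates a better snapshot. The snapshot representation and the `blend` parameter (the "certain percentage" of partial adjustment) are illustrative assumptions, not taken from the patent.

```python
def choose_adjusted_pose(initial_pose, snapshots, metric_fn, blend=1.0):
    """Select the snapshot whose pose scores best under `metric_fn`
    (lower is better) and move the initial pose toward it.

    Each snapshot is a dict with a "pose" (the input device's pose)
    and a "state" (the game state, e.g. the target's position).
    blend=1.0 replaces the initial pose with the best snapshot's
    pose; blend=0.5 moves it only halfway (a partial adjustment)."""
    best = min(snapshots, key=lambda s: metric_fn(s["pose"], s["state"]))
    return tuple(
        i + blend * (b - i) for i, b in zip(initial_pose, best["pose"])
    )
```

Passing `blend=1.0` corresponds to setting the adjusted pose to the determined snapshot's pose; a smaller `blend` corresponds to the partial-adjustment variant that mixes in the initial pose.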

Referring further to steps 510 and 512, in some embodiments the video game console can compare the metrics of the plurality of snapshots to determine two or more snapshots with higher metrics than one or more remaining snapshots of the plurality of snapshots (e.g., by comparing the metrics among each other, by comparing the metrics to a threshold, and/or the like). The video game console can determine the adjusted pose based on the determined two or more snapshots. For example, the adjusted pose can be determined based on a weighting function of the first poses in the two or more snapshots. The weightings can, for example, weight each snapshot equally, weight snapshots closer in time to the input higher than snapshots further away in time from the input, weight snapshots closer in time to the input lower than snapshots further away in time from the input, and/or the like.
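The weighting function described above can be sketched as a weighted average of the snapshot poses. The inverse-age weighting below is one hypothetical choice among those the text allows: `recency_bias=0` weights all snapshots equally, while a positive value weights snapshots closer in time to the input more heavily.

```python
def weighted_adjusted_pose(snapshots, input_time, recency_bias=1.0):
    """Combine several snapshot poses into one adjusted pose.

    `snapshots` is a list of (timestamp, pose) pairs with timestamps
    at or before `input_time`. Each pose is weighted by the inverse
    of its age relative to the input, so more recent snapshots
    contribute more when `recency_bias` > 0."""
    weights = [
        1.0 / (1.0 + recency_bias * (input_time - ts)) for ts, _ in snapshots
    ]
    total = sum(weights)
    dims = len(snapshots[0][1])
    return tuple(
        sum(w * pose[i] for w, (_, pose) in zip(weights, snapshots)) / total
        for i in range(dims)
    )
```

Inverting the weights (or using any other schedule) would implement the opposite variant, where snapshots further from the input time are favored.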

While exemplary metrics have been discussed that are determined based on a distance or a heat map, other metrics can also be used. In some embodiments, the metrics can be based on particular aspects of the video game. For example, a metric can be the number of times an aimed laser bounces off of reflective surfaces (e.g., where the more times the laser bounces off reflective surfaces in the game, the better the shot). As another example, a metric can reward a shot that causes the target to ricochet in a direction that hits the maximum number of other targets.

The techniques described herein can be used to improve player aim in Harmonix's AUDICA™ VR rhythm shooter game. AUDICA™ can be played on various VR platforms, such as a PC configured to work with a VR headset (e.g., the HTC Vive or Oculus Rift headsets) and associated spatially tracked controllers. FIG. 10 shows an exemplary display 1000 of Harmonix's AUDICA™ game, according to some examples. In the display 1000, the user is to aim the gun 1002 at the target 1004. When the user presses the trigger (e.g., a button) on the spatially tracked controller of the VR platform, it causes the user's aim to move to location 1006, rather than staying at the target 1004 (e.g., since the controller is light, pressing the button unintentionally moves the user's controller). The result is that the game fires where the gun is pointed at the moment in time that the trigger is pressed, resulting in a missed shot.

FIG. 11 shows an exemplary display 1100 of Harmonix's AUDICA™ game using the techniques described herein, according to some embodiments. As with FIG. 10, in the display 1100, the user is to aim the gun 1102 at the target 1104. The system keeps track of the last five snapshots, including where the user was aiming in each of those snapshots, which are shown as aiming locations 1106A through 1106E. When the user presses the trigger on the controller, rather than simply firing at the aiming location 1106A (e.g., as in FIG. 10), the gaming system uses the five snapshots to determine which location to fire at. In this example, the system computes a metric for each snapshot to determine which of the five aiming locations is the most accurate, which is location 1106D. Upon determining that location 1106D is the most accurate, the gaming platform fires at location 1106D instead of location 1106A. Therefore, the techniques described herein can be used to improve player performance in AUDICA™, since the shot in example 1100 is a successful shot, whereas otherwise the shot would be unsuccessful due to unintentional controller motion.
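The five-snapshot scheme described for AUDICA™ can be sketched with a fixed-size ring buffer of recent aim samples. This is a hypothetical illustration of the idea, not Harmonix's actual implementation; the class and method names are invented.

```python
from collections import deque

class AimHistory:
    """Track the last `size` aim snapshots; on trigger pull, fire at
    the recorded aim closest to its target rather than at the
    instantaneous (possibly jerked) aim."""

    def __init__(self, size=5):
        # A deque with maxlen silently drops the oldest snapshot.
        self.snapshots = deque(maxlen=size)

    def record(self, aim_point, target_point):
        """Call once per frame with the current aim and target."""
        self.snapshots.append((aim_point, target_point))

    def fire(self):
        """Return the recorded aim with the smallest miss distance."""
        def miss(snap):
            aim, target = snap
            return sum((a - t) ** 2 for a, t in zip(aim, target))
        return min(self.snapshots, key=miss)[0]
```

In terms of FIG. 11, the buffer would hold the aims corresponding to locations 1106A-1106E, and `fire()` would return the one closest to the target (location 1106D) rather than the instantaneous aim (location 1106A).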

The various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of numerous suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a virtual machine or a suitable framework.

In this respect, various inventive concepts may be embodied as at least one non-transitory computer readable storage medium (e.g., a computer memory, one or more floppy discs, compact discs, optical discs, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, etc.) encoded with one or more programs that, when executed on one or more computers or other processors, implement the various embodiments of the present invention. The non-transitory computer-readable medium or media may be transportable, such that the program or programs stored thereon may be loaded onto any computer resource to implement various aspects of the present invention as discussed above.

The terms “program,” “software,” and/or “application” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion among different computers or processors to implement various aspects of the present invention.

Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.

Also, data structures may be stored in non-transitory computer-readable storage media in any suitable form. Data structures may have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a non-transitory computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish relationships among information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships among data elements.

Various inventive concepts may be embodied as one or more methods, of which examples have been provided. The acts performed as part of a method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.

The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This allows elements to optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.

The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.

As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.

Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term).

The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing”, “involving”, and variations thereof, is meant to encompass the items listed thereafter and additional items.

Having described several embodiments of the invention in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and is not intended as limiting.

Various aspects are described in this disclosure, which include, but are not limited to, the following aspects:

1. A computerized method implemented by a processor in communication with a memory, wherein the memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform: accessing data indicative of a set of snapshots of one or more aspects of a video game over time, wherein each of one or more of the set of snapshots: is associated with a timestamp; and comprises data indicative of: (a) a first pose associated with an input device, and (b) a state of a video game simulation at a time of the snapshot; receiving data indicative of: an input from the input device at a time occurring after one or more timestamps associated with the one or more snapshots; and an initial pose associated with the input device for the input; and determining an adjusted pose for the input based on a relationship, for each snapshot of the one or more snapshots, between (i) the first pose associated with the input device and (ii) the state of the video game simulation at the time of the snapshot.

2. The method of aspect 1, wherein the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; and the state of the video game simulation comprises a position of a target.

3. The method of aspect 1, wherein the one or more snapshots comprise a first snapshot that is associated with a first timestamp, wherein the first timestamp is within (a) a predetermined time period before the time of the input, and/or (b) a predetermined number of snapshots before the time of the input; and the adjusted pose is the first pose associated with the input device of the first snapshot.

4. The method of aspect 1, wherein the one or more snapshots comprise a plurality of snapshots and the one or more timestamps comprise a plurality of timestamps, wherein each one of the plurality of snapshots is associated with a respective one of the plurality of timestamps, each of the plurality of timestamps being (a) within a predetermined time period before the time of the input, and/or (b) within a predetermined number of snapshots before the time of the input.

5. The method of aspect 4, further comprising determining, for each of at least some of the plurality of snapshots, a metric based on the first pose associated with the input device and the state of the video game simulation at the time of the snapshot.

6. The method of aspect 5, wherein the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; the state of the video game simulation comprises a position of a target; and determining the metric for each of the at least some snapshots comprises determining an indication of accuracy of the ray pointing at the position.

7. The method of aspect 5, wherein determining the adjusted pose comprises: comparing determined metrics for the at least some snapshots to determine a snapshot of the at least some snapshots with a highest metric; and determining the adjusted pose based on the determined snapshot.

8. The method of aspect 7, wherein determining the adjusted pose based on the determined snapshot with the highest metric comprises determining the adjusted pose to be the first pose of the determined snapshot.

9. The method of aspect 7, wherein determining the adjusted pose comprises determining the adjusted pose based on the initial pose and the first pose of the determined snapshot.

10. The method of aspect 6, wherein determining the adjusted pose comprises: comparing the metrics of the at least some snapshots to determine a subset of two or more snapshots of the at least some snapshots with higher metrics than one or more remaining snapshots of the at least some snapshots; and determining the adjusted pose based on the determined subset of two or more snapshots.

11. The method of aspect 1, wherein the input device is a spatially tracked controller; and the input is indicative of a button press on the spatially tracked controller.

12. The method of aspect 1, wherein the state of the video game simulation at the time of the snapshot comprises a second pose associated with a target in the snapshot.

13. A non-transitory computer-readable media comprising instructions that, when executed by one or more processors on a computing device, are operable to cause the one or more processors to execute: accessing data indicative of a set of snapshots of one or more aspects of a video game over time, wherein each of one or more of the set of snapshots: is associated with a timestamp; and comprises data indicative of: (a) a first pose associated with an input device, and (b) a state of a video game simulation at a time of the snapshot; receiving data indicative of: an input from the input device at a time occurring after one or more timestamps associated with the one or more snapshots; and an initial pose associated with the input device for the input; and determining an adjusted pose for the input based on a relationship, for each snapshot of the one or more snapshots, between (i) the first pose associated with the input device and (ii) the state of the video game simulation at the time of the snapshot.

14. The non-transitory computer-readable media of aspect 13, wherein the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; and the state of the video game simulation comprises a position of a target.

15. The non-transitory computer-readable media of aspect 13, wherein the one or more snapshots comprises a plurality of snapshots, and the instructions further cause the one or more processors to execute: determining a metric for each of at least some of the plurality of snapshots; comparing determined metrics for the at least some snapshots to determine a snapshot of the at least some snapshots with a highest metric; and determining the adjusted pose based on the determined snapshot with the highest metric.

16. The non-transitory computer-readable media of aspect 15, wherein the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; the state of the video game simulation at the time of the snapshot comprises a position of a target; and determining the metric for each of the at least some snapshots comprises determining an indication of accuracy of the ray pointing at the position.

17. A system comprising a memory storing instructions, and a processor configured to execute the instructions to perform: accessing data indicative of a set of snapshots of one or more aspects of a video game over time, wherein each of one or more of the set of snapshots: is associated with a timestamp; and comprises data indicative of: (a) a first pose associated with an input device, and (b) a state of a video game simulation at a time of the snapshot; receiving data indicative of: an input from the input device at a time occurring after one or more timestamps associated with the one or more snapshots; and an initial pose associated with the input device for the input; and determining an adjusted pose for the input based on a relationship, for each snapshot of the one or more snapshots, between (i) the first pose associated with the input device and (ii) the second pose associated with the target.

18. The system of aspect 17, wherein the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; and the state of the video game simulation comprises a position of a target.

19. The system of aspect 17, wherein the one or more snapshots comprise a first snapshot that is associated with a first timestamp, wherein the first timestamp is within (a) a predetermined time period before the time of the input, and/or (b) a predetermined number of snapshots before the time of the input; and the processor is configured to execute the instructions to perform: determining the adjusted pose to be the first pose associated with the input device of the first snapshot.

20. The system of aspect 17, wherein the one or more snapshots comprises a plurality of snapshots, and the processor is configured to execute the instructions to perform: determining a metric for each of at least some of the plurality of snapshots; comparing determined metrics for the at least some snapshots to determine a snapshot of the at least some snapshots with a highest metric; and determining the adjusted pose based on the determined snapshot with the highest metric.

21. The system of aspect 20, wherein the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; the state of the video game simulation comprises a position of a target; and determining the metric for each of the at least some snapshots comprises determining an indication of accuracy of the ray pointing at the position.

Claims

  1. A computerized method implemented by a processor in communication with a memory, wherein the memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform: accessing data indicative of a set of snapshots of one or more aspects of a video game over time, wherein each of one or more of the set of snapshots: is associated with a timestamp; and comprises data indicative of: (a) a first pose associated with an input device, and (b) a state of a video game simulation at a time of the snapshot; receiving data indicative of: an input from the input device at a time occurring after one or more timestamps associated with the one or more snapshots; and an initial pose associated with the input device for the input; and determining an adjusted pose for the input based on a relationship, for each snapshot of the one or more snapshots, between (i) the first pose associated with the input device and (ii) the state of the video game simulation at the time of the snapshot.
  2. The method of claim 1, wherein: the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; and the state of the video game simulation comprises a position of a target.
  3. The method of claim 1, wherein: the one or more snapshots comprise a first snapshot that is associated with a first timestamp, wherein the first timestamp is within (a) a predetermined time period before the time of the input, and/or (b) a predetermined number of snapshots before the time of the input; and the adjusted pose is the first pose associated with the input device of the first snapshot.
  4. The method of claim 1, wherein the one or more snapshots comprise a plurality of snapshots and the one or more timestamps comprise a plurality of timestamps, wherein each one of the plurality of snapshots is associated with a respective one of the plurality of timestamps, each of the plurality of timestamps being (a) within a predetermined time period before the time of the input, and/or (b) within a predetermined number of snapshots before the time of the input.
  5. The method of claim 4, further comprising determining, for each of at least some of the plurality of snapshots, a metric based on the first pose associated with the input device and the state of the video game simulation at the time of the snapshot.
  6. The method of claim 5, wherein: the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; the state of the video game simulation comprises a position of a target; and determining the metric for each of the at least some snapshots comprises determining an indication of accuracy of the ray pointing at the position.
  7. The method of claim 5, wherein determining the adjusted pose comprises: comparing determined metrics for the at least some snapshots to determine a snapshot of the at least some snapshots with a highest metric; and determining the adjusted pose based on the determined snapshot.
  8. The method of claim 7, wherein determining the adjusted pose based on the determined snapshot with the highest metric comprises determining the adjusted pose to be the first pose of the determined snapshot.
  9. The method of claim 7, wherein determining the adjusted pose comprises determining the adjusted pose based on the initial pose and the first pose of the determined snapshot.
  10. The method of claim 6, wherein determining the adjusted pose comprises: comparing the metrics of the at least some snapshots to determine a subset of two or more snapshots of the at least some snapshots with higher metrics than one or more remaining snapshots of the at least some snapshots; and determining the adjusted pose based on the determined subset of two or more snapshots.
  11. The method of claim 1, wherein: the input device is a spatially tracked controller; and the input is indicative of a button press on the spatially tracked controller.
  12. The method of claim 1, wherein the state of the video game simulation at the time of the snapshot comprises a second pose associated with a target in the snapshot.
  13. A non-transitory computer-readable media comprising instructions that, when executed by one or more processors on a computing device, are operable to cause the one or more processors to execute: accessing data indicative of a set of snapshots of one or more aspects of a video game over time, wherein each of one or more of the set of snapshots: is associated with a timestamp; and comprises data indicative of: (a) a first pose associated with an input device, and (b) a state of a video game simulation at a time of the snapshot; receiving data indicative of: an input from the input device at a time occurring after one or more timestamps associated with the one or more snapshots; and an initial pose associated with the input device for the input; and determining an adjusted pose for the input based on a relationship, for each snapshot of the one or more snapshots, between (i) the first pose associated with the input device and (ii) the state of the video game simulation at the time of the snapshot.
  14. The non-transitory computer-readable media of claim 13, wherein: the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; and the state of the video game simulation comprises a position of a target.
  15. The non-transitory computer-readable media of claim 13, wherein the one or more snapshots comprises a plurality of snapshots, and the instructions further cause the one or more processors to execute: determining a metric for each of at least some of the plurality of snapshots; comparing determined metrics for the at least some snapshots to determine a snapshot of the at least some snapshots with a highest metric; and determining the adjusted pose based on the determined snapshot with the highest metric.
  16. The non-transitory computer-readable media of claim 15, wherein: the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; the state of the video game simulation at the time of the snapshot comprises a position of a target; and determining the metric for each of the at least some snapshots comprises determining an indication of accuracy of the ray pointing at the position.
  17. A system comprising a memory storing instructions, and a processor configured to execute the instructions to perform: accessing data indicative of a set of snapshots of one or more aspects of a video game over time, wherein each of one or more of the set of snapshots: is associated with a timestamp; and comprises data indicative of: (a) a first pose associated with an input device, and (b) a state of a video game simulation at a time of the snapshot; receiving data indicative of: an input from the input device at a time occurring after one or more timestamps associated with the one or more snapshots; and an initial pose associated with the input device for the input; and determining an adjusted pose for the input based on a relationship, for each snapshot of the one or more snapshots, between (i) the first pose associated with the input device and (ii) the second pose associated with the target.
  18. The system of claim 17, wherein: the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; and the state of the video game simulation comprises a position of a target.
  19. The system of claim 17, wherein the one or more snapshots comprise a first snapshot that is associated with a first timestamp, wherein the first timestamp is within (a) a predetermined time period before the time of the input, and/or (b) a predetermined number of snapshots before the time of the input; and the processor is configured to execute the instructions to perform: determining the adjusted pose to be the first pose associated with the input device of the first snapshot.
  20. The system of claim 17, wherein the one or more snapshots comprises a plurality of snapshots, and the processor is configured to execute the instructions to perform: determining a metric for each of at least some of the plurality of snapshots; comparing determined metrics for the at least some snapshots to determine a snapshot of the at least some snapshots with a highest metric; and determining the adjusted pose based on the determined snapshot with the highest metric.
  21. The system of claim 20, wherein: the first pose associated with the input device comprises a ray, the ray comprising a position and a direction; the state of the video game simulation comprises a position of a target; and determining the metric for each of the at least some snapshots comprises determining an indication of accuracy of the ray pointing at the position.