U.S. Pat. No. 11,033,819

IMPLEMENTING A GRAPHICAL OVERLAY FOR A STREAMING GAME BASED ON CURRENT GAME SCENARIO

Assignee: Microsoft Technology Licensing, LLC

Issue Date: October 17, 2019

Illustrative Figure

Abstract

A system is configured to implement a graphical overlay in a streaming game based on a current game state. Game data generated by a video game is received including game video in the form of a video stream containing game video frames. The game video is displayed on a display screen of a computing device to represent the video game to a user playing the video game at the computing device. At least one feature of the video game is identified at least in the game data. A user interface (UI) control configuration associated with the identified at least one feature is selected from among a plurality of UI control configurations for the video game and a graphical overlay corresponding to the selected UI control configuration is implemented on the video game in the display screen.

Description


The features and advantages of the embodiments described herein will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.

DETAILED DESCRIPTION

I. Introduction

The present specification and accompanying drawings disclose one or more embodiments that incorporate the features of the disclosed embodiments. The scope of the embodiments is not limited only to the aspects disclosed herein. The disclosed embodiments merely exemplify the intended scope, and modified versions of the disclosed embodiments are also encompassed. Embodiments are defined by the claims appended hereto.

References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

Furthermore, it should be understood that spatial descriptions (e.g., “above,” “below,” “up,” “left,” “right,” “down,” “top,” “bottom,” “vertical,” “horizontal,” etc.) used herein are for purposes of illustration only, and that practical implementations of the structures described herein can be spatially arranged in any orientation or manner.

In the discussion, unless otherwise stated, adjectives such as “substantially” and “about” modifying a condition or relationship characteristic of a feature or features of an embodiment of the disclosure, are understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended.

Numerous exemplary embodiments are described as follows. It is noted that any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, embodiments disclosed in any section/subsection may be combined with any other embodiments described in the same section/subsection and/or a different section/subsection in any manner.

II. Example Embodiments

To permit a user to play a video game on a touch screen device that the game was not designed for, a graphical overlay may be presented on the touch screen device. The graphical overlay provides touch screen controls that the user interacts with to play the video game, mapping user interactions with the touch controls of the touch device to the physical controls of the source device for the video game. For example, a video game may be designed for a video game console that a user interacts with using a handheld game controller. The video game may be streamed to a touch device, such as a smart phone. A graphical overlay may be presented on the touch device to map the game controller's controls (physical buttons, sticks, etc.) to touch screen controls (e.g., graphical buttons, etc.) for touch interaction by the user.

Such graphical overlays are conventionally configured in several ways. For instance, a two-digit configuration can be used in which the user applies their two thumbs to controls on the touch screen. The touch input is mapped to gestures that use only those two digits. However, video games designed to be used with a game controller, mouse, and/or keyboard may require the user to manipulate numerous physical controls concurrently. Accordingly, a two-digit configuration on a touch screen can make certain video games unplayable, as the user cannot provide all the input signals the video game requires.

In another conventional implementation, a one-size-fits-all graphical overlay may be presented that includes a touch screen control for every control of a game controller. However, because game controllers typically include numerous controls of varying types at various locations, such a layout may present controls the user does not need while occupying most, if not all, of the display screen. This can make some video games unplayable because the user cannot see the video game behind the control layout. Accordingly, a one-size-fits-all configuration is not typically practical from a user experience perspective.

In an attempt to solve these issues, control layouts of graphical overlays may be customized for particular game scenarios. However, many video games switch the control layout based on what is going on in the video game (i.e., the "game state" or "game scenario"). For instance, different sets of controls may be used to operate a video game's menu, to cause an in-game character to walk around, to drive a car, to fly, etc. Accordingly, a set of controls customized to a single game scenario is not typically practical, because a change in game scenario leaves the displayed controls mismatched to what the new scenario requires.

Embodiments overcome these and other issues related to graphical overlays in video game streaming. In an embodiment, machine learning may be used to determine the current game scenario and cause the switching from a prior graphical overlay to a more appropriate graphical overlay designed for the determined current game scenario. In an alternative embodiment, specific identified pixel arrangements and/or sounds of the video game may identify the current game scenario and cause the switching from the prior graphical overlay to the more appropriate graphical overlay for the determined current game scenario.

In embodiments, game data generated by a video game at a first computing device may include game video in the form of a video stream of video frames. The game data may be streamed to a second computing device. The game video is displayed on a display screen of the second computing device to represent the video game to a user playing the video game. During game play, game-related data (e.g., game video frames, game audio data, streams of input events, hardware usage metrics during video game play, further metadata generated during video game play such as log file contents, API (application programming interface) accesses, etc.) may be analyzed to identify one or more game-related features that correspond to the current game scenario, which may have just changed, or may be in the process of changing, from a prior game scenario. In response to the determination, a user interface (UI) control configuration associated with the identified feature(s) is selected from a plurality of UI control configurations for the video game. A graphical overlay corresponding to the selected UI control configuration is implemented on the video game in the display screen. The graphical overlay is configured for that specific current game scenario, having a specific selected set of controls, locations of the controls, etc., configured to make game play more efficient and enjoyable for the user in that game scenario. The user is enabled to interact with the graphical overlay to play the video game in that current game scenario.
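
The flow just described — identify game-related features, select an associated UI control configuration, and implement its graphical overlay — can be illustrated with a minimal Python sketch. The configuration names, control names, and the `select_overlay` helper below are hypothetical, introduced only for illustration; they are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class UIControlConfiguration:
    """A named set of touch controls making up one graphical overlay."""
    name: str
    controls: tuple

# Hypothetical control configuration library keyed by identified feature.
CONTROL_CONFIGS = {
    "car_rear": UIControlConfiguration(
        "driving", ("steer_left", "steer_right", "throttle", "brake")),
    "menu_open": UIControlConfiguration(
        "menu", ("up", "down", "select", "back")),
}

# Fallback overlay used when no scenario-specific feature is recognized.
DEFAULT_CONFIG = UIControlConfiguration(
    "walking", ("move_stick", "jump", "interact"))

def select_overlay(identified_features):
    """Return the control configuration for the first recognized feature,
    falling back to the default configuration when nothing matches."""
    for feature in identified_features:
        if feature in CONTROL_CONFIGS:
            return CONTROL_CONFIGS[feature]
    return DEFAULT_CONFIG
```

In a real client, the list of identified features would come from the trained model or a feature-to-scenario map, and the returned configuration would be handed to the game engine for rendering over the game video.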

As such, embodiments enable the providing of appropriate controls for specific game scenarios based on identified game-related features. In this way, the user experience for a video game streamed to a touch device closely approximates the user experience for the video game played on the source device for which it was designed.

In an embodiment, a trained machine learning model may be generated and used to enable an appropriate graphical overlay to be selected for game scenarios. In an embodiment, the trained machine learning model may receive game-related data to identify game features, which are used to generate a confidence score indicative of a confidence level that the feature(s) are actually identified, and that the game has switched to an associated game scenario.

Such a trained machine learning model may be generated in various ways. For instance, to generate such a model, the video game may be executed in a machine learning (ML) application, such as TensorFlow™, to generate training game data that includes a training video stream. Training indications corresponding to the video game may be inputted into the ML application during training game play. In one embodiment, the training indications are inputted manually (e.g., by a game developer). Alternatively, the training indications may be provided automatically, such as by a computer. The trained machine learning model is generated based on the training indications and the generated training game data.
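
The pairing of training indications with training game data can be sketched as follows. The timestamps, feature vectors, and scenario labels are hypothetical; a real pipeline would feed such labeled pairs to an ML framework (e.g., TensorFlow) as supervised training data.

```python
def build_training_set(frames, indications):
    """Pair each captured frame's feature vector with the scenario label
    flagged for that timestamp, or 'none' when the frame was unflagged."""
    labeled = []
    for timestamp, feature_vector in frames:
        label = indications.get(timestamp, "none")
        labeled.append((feature_vector, label))
    return labeled

# Hypothetical captured frames: (timestamp, feature vector).
frames = [(0.0, [0.1, 0.9]), (1.5, [0.8, 0.2]), (3.0, [0.7, 0.3])]
# Developer-entered training indications, keyed by timestamp.
indications = {1.5: "driving", 3.0: "driving"}

training_set = build_training_set(frames, indications)
```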

In some instances, such as with racing games, a graphical overlay may require additional modification to accurately simulate the original input device, such as a game controller. Embodiments enable control tuning for the implemented graphical overlay based on user input. For instance, a game input response curve associated with a physical game controller input device may be linearized. The linearized game input response curve may be tuned for a touch control of the graphical overlay such that a tuned input response curve is associated with the touch control. The tuned input response curve determines how the game input responds to user input during game play. In this manner, user experience may be improved for that touch control.
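
The linearize-then-tune step can be sketched as two small functions. The deadzone value and the tuning exponent below are illustrative assumptions, not values from the patent; the intent is only to show how a linearized controller response can be reshaped for a touch control.

```python
def linearize(raw, deadzone=0.1):
    """Map a raw stick reading in [-1, 1] to a linear response by removing
    the physical controller's deadzone (deadzone value is illustrative)."""
    if abs(raw) < deadzone:
        return 0.0
    sign = 1.0 if raw > 0 else -1.0
    return sign * (abs(raw) - deadzone) / (1.0 - deadzone)

def tune_for_touch(linear, exponent=1.5):
    """Apply a tuned response curve to a linearized input so that small
    touch movements give fine control while full deflection still
    reaches maximum input."""
    sign = 1.0 if linear >= 0 else -1.0
    return sign * abs(linear) ** exponent
```

An exponent above 1.0 flattens the curve near zero (finer steering at small deflections), which is one plausible way a touch steering control might be tuned relative to a physical stick.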

Embodiments for generating and utilizing graphical overlays for streamed video games may be implemented in various ways. For instance, FIG. 1 shows a block diagram of a system 100 for implementing graphical overlays for a streaming video game, according to an example embodiment. As shown in FIG. 1, system 100 includes a first computing device 102 (training device) included in a training phase 126, and a second computing device 136 (game source/originating device) and a third computing device 104 (game play/client device) included in a live game play phase 128. Computing device 102 includes a video game application 106 and a machine learning (ML) application 108. Computing device 136 includes a streaming service 138 that streams a source video game application 140. Computing device 104 includes a display screen 112 and a video game streaming client 114. Video game streaming client 114 includes a video game event recognition machine learning (VGERML) model 110, a control configuration determiner 118, a game engine 120, and a control configuration library 122. FIG. 1 is described in further detail as follows.

Computing devices 102 and 136 may each include any type of computing device, mobile or stationary, such as a desktop computer, a server, a video game console, etc. Computing device 104 may be any type of mobile computing device (e.g., a Microsoft® Surface® device, a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer such as an Apple iPad™, a netbook, etc.), a mobile phone (e.g., a cell phone, a smart phone such as a Microsoft Windows® phone, an Apple iPhone, a phone implementing the Google® Android™ operating system, etc.), a wearable computing device (e.g., a head-mounted device including smart glasses such as Google® Glass™, Oculus Rift® by Oculus VR, LLC, etc.), a stationary computing device such as a desktop computer or PC (personal computer), a gaming console/system (e.g., Microsoft Xbox®, Sony PlayStation®, Nintendo Wii® or Switch®, etc.), etc.

Training phase 126 is used to generate a machine learning model used during live game play phase 128 to identify features in game-related data used to select graphical overlays for display. As shown in FIG. 1, video game application 106 is a video game program (e.g., implemented in program code executed by one or more processors). Video game application 106 may be any type of video game, including a casual game, a serious game, an educational game, a shooter game, a driving game, etc. During execution, video game application 106 generates training game data 148, which includes data representative of the video game during play. Training game data 148 is the game data of the video game generated during training and is presented to the game-playing users. For instance, training game data 148 may include video data for display on a display screen, audio data for play by one or more loudspeakers, and/or other types of data generated during the training phase. During training phase 126, video game application 106 also receives user input data 134 from one or more user interface devices used by users, such as a game controller, a touch screen, a keyboard, etc. User input data 134 indicates the actions taken by the user in playing the video game during the training phase (e.g., pushing a button, moving a stick to the right, etc.). User input data 134 is processed by video game application 106 to determine how the video game proceeds, and thus is used in the generation of training game data 148 presented to the user.

ML application 108 is configured to receive and process training game data 148 and training indications 146 to generate a video game event recognition machine learning (VGERML) model 110. For example, in an embodiment, ML application 108 may implement a supervised machine-learning algorithm based on the actual game data of training game data 148 and the training input provided in the form of training indications 146 to generate VGERML model 110. VGERML model 110 is a machine learning model generated to be used during live game play phase 128 to identify features in game-related data. The identified features are used to select graphical overlays for display and interaction by the user of a game consumption device different from the source device (computing device 102). As noted above, training indications 146 may be entered manually or by a computer. In embodiments, training indications 146 may indicate the locations of objects displayed in game video frames of the training game data, may indicate the timing of sounds in audio frames of the training game data, and/or may indicate further game-related aspects.

For instance, objects such as weapons, tools, characters, vehicles, portions thereof (e.g., eyes, headlights, taillights, license plates, etc.), and other objects may be displayed by a video game during game play. Objects such as these or others that are determined to be important to particular game scenarios may be flagged during training game play for model training by a game developer user (or automatically), such as by indicating their location in game video (e.g., by the user indicating an object's location by a point, by drawing a box around the object, etc.). Additionally, or alternatively, training indications 146 may include indications of the timing of sounds in audio of the training game data. For instance, the user may indicate the time of the sound of a car engine starting up, of a weapon being used (e.g., a chainsaw running, a gun shooting), of a particular character talking, or the like in the video game that are deemed to correlate to a particular game scenario.

In still further embodiments, training indications 146 may include a stream of input events. For instance, the user performing the model training may indicate particular input events, such as one or more input events that correspond to the user selecting a car from inventory or on screen, one or more input events that correspond to the user selecting a particular weapon, or the like, that correlate to a particular game scenario. In yet further embodiments, training indications 146 may include indications of particular hardware usage. For instance, indicated hardware usage may include an indication of certain processor utilization levels, memory usage levels, disk accesses, or the like, as well as log file contents, API accesses, etc., that correspond to particular game activities or to particular objects being rendered to the screen, such as a car, a truck, a boat, a helicopter, a character, or the like, that correlate to a particular game scenario.

ML application 108 generates VGERML model 110 based on training game data 148 and training indications 146. ML application 108 may use any suitable techniques to generate VGERML model 110, including supervised ML model generation algorithms such as support vector machines (SVM), linear regression, logistic regression, naïve Bayes, linear discriminant analysis, decision trees, the k-nearest neighbor algorithm, neural networks, etc. In an embodiment, the generated VGERML model 110 is capable of providing a confidence level indicative of whether a feature is identified in game data. If the confidence level is sufficient (e.g., over 50%), a graphical overlay may be selected corresponding to a game scenario in which that feature (or features) is identified.

In particular, upon ML application 108 generating VGERML model 110, training phase 126 is complete and live game play phase 128 may begin. To enable live game play phase 128, VGERML model 110 is included in video game streaming client 114 of computing device 104. Video game streaming client 114 is a client-based application used at computing device 104 by a game player to play a streamed instance of the video game during live game play phase 128. Live game play phase 128 is described in further detail as follows.

As noted above, computing device 104 is useable by a user to play a video game that was not designed to be played on computing device 104. The video game may be streamed from a computing device, such as computing device 136. For instance, source video game application 140 may be executed on computing device 136 (e.g., a desktop computer, a game console, etc.). Source video game application 140 is an instance of video game application 106 that may be executed by computing device 136 to enable a game player to play the video game. Furthermore, computing device 136 includes streaming service 138 configured to stream game data 144 of source video game application 140 to another device, such as computing device 104. In particular, streaming service 138 is configured to transmit game data 144 over a network, wireless and/or wired, which may include one or more network cables, a local area network (LAN) such as a wireless LAN (WLAN or "Wi-Fi"), and/or a wide area network (WAN), such as the Internet.

In an embodiment, streaming service 138 at computing device 136 and video game streaming client 114 at computing device 104 work together to present the video game of source video game application 140 executed at computing device 136 to a user at computing device 104. In particular, streaming service 138 streams game data 144 (e.g., video and audio data) generated by execution of source video game application 140 to video game streaming client 114, which presents the video game to the user at computing device 104. In return, video game streaming client 114 streams user input data 142 received from the user interacting with the video game at computing device 104 to streaming service 138 to provide to source video game application 140 as user input events. In this manner, though source video game application 140 executes at a first computing device (computing device 136), a user can play the video game at a second computing device (computing device 104), even if the video game was not designed for play on the second computing device. Examples of server-client streaming services into which embodiments may be incorporated include those provided by the Microsoft Xbox® platform, the Steam® platform provided by Valve Corporation, etc.

In further example detail, as shown in FIG. 1, game engine 120 of video game streaming client 114 receives game data 144 from source video game application 140 streamed by streaming service 138. In an embodiment, video game streaming client 114 is configured to present the video game to the user of computing device 104, including presenting video and audio of the video game at computing device 104, as well as receiving user input events provided by the user at computing device 104. For example, in embodiments, game engine 120 may display video of video data in game data 144 on display screen 112, broadcast audio of audio data (if present) in game data 144 through speakers of computing device 104, and receive input signals from input controls presented to the user of computing device 104. User input data 142 is received by game engine 120 from one or more user interfaces at computing device 104, including display screen 112, which may be a touch screen. User input data 142 indicates the actions taken by the user in playing the video game during the live game play phase 128 (e.g., touching displayed controls on display screen 112). As shown in FIG. 1, game engine 120 may transmit the input signals from the input controls to source video game application 140 as user input data 142. User input data 142 is processed by source video game application 140 to determine subsequent video game execution, and thus is used in the generation of game data 144 presented to the user.

In an embodiment, game data 144 includes game video in the form of a video stream containing game video frames. Display screen 112 is configured to display the game video to represent the video game to a user playing the video game at the computing device. For instance, and as shown in FIG. 1, display screen 112 receives and displays video game video 116 at computing device 104. Furthermore, display screen 112 may display a graphical overlay 124 over video game video 116. Graphical overlay 124 includes an arrangement of one or more graphical controls, referred to herein as a user interface (UI) control configuration, that a game player may interact with on display screen 112 during game play.

Control configuration determiner 118 is configured to select a UI control configuration based on the current game scenario of the video game. To aid in the identification of the game scenario, control configuration determiner 118 is configured to identify one or more features in game data 144. As noted above, a feature may be present in a game video frame, in game audio data, in a stream of input events provided to the video game, in hardware usage of the computing device, in log files, in API accesses, etc.

For instance, and as shown in FIG. 1, control configuration determiner 118 may access VGERML model 110 to identify a feature 130. As shown in FIG. 1, VGERML model 110 receives game data 144 as input, and may additionally receive input events 150 from display screen 112 and hardware/machine usage data 152 from a task manager or other hardware manager of computing device 104. Input events 150 include indications of a stream of user input events (UI control interactions) provided by the user at display screen 112. Input events 150 are also received by game engine 120 to be transmitted to source video game application 140 in user input data 142 as described elsewhere herein. Hardware usage data 152 includes indications of hardware usage of computing device 104 during video game play, including processor utilization levels, memory usage levels, disk accesses, log file contents/accesses, API accesses, or the like, that correspond to particular game activities. Based on game data 144, and optionally on input events 150 and/or hardware usage data 152, VGERML model 110 identifies a feature and associated confidence score 130. Feature and associated confidence score 130 are useable by control configuration determiner 118 to determine a game scenario.

In an embodiment, if the confidence score has a predetermined relationship with a threshold value, the feature is determined to be present. For instance, a confidence score may have any suitable range, including 0.0 (low confidence) to 1.0 (high confidence). The confidence score may be compared to a predetermined threshold value, such as 0.5, 0.7, etc. If the confidence score is greater than the threshold value, control configuration determiner 118 indicates the feature as identified, and is configured to interface with control configuration library 122 to select the UI control configuration that corresponds to the identified feature. For instance, and as shown in FIG. 1, control configuration determiner 118 may select a UI control configuration from control configuration library 122. Control configuration library 122 includes a plurality of control configurations with associated graphical overlays that may be presented during play of the video game. In the example of FIG. 1, in response to the identified feature, control configuration determiner 118 selects a control configuration 132 that corresponds to graphical overlay 124.
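
The threshold comparison and library lookup can be sketched as follows. The threshold value, feature names, and configuration names are illustrative assumptions, not taken from the patent.

```python
def select_configuration(feature, confidence, library, current, threshold=0.7):
    """Switch to the configuration associated with `feature` only when the
    model's confidence score clears the threshold; otherwise keep the
    currently displayed configuration."""
    if confidence > threshold and feature in library:
        return library[feature]
    return current

# Hypothetical control configuration library keyed by feature name.
library = {"car_rear": "driving_overlay", "menu_open": "menu_overlay"}
```

Keeping the current configuration on a low-confidence identification avoids flickering between overlays when the model is uncertain about the game scenario.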

Alternatively, control configuration determiner 118 may access a feature-to-scenario map to identify a game scenario based on identified features. In particular, one or more features in a feature-to-scenario map may be specifically searched for. If the one or more features are found in game data (e.g., a particular sound, one or more specific pixels having a particular attribute, such as a particular color), the identified feature(s) is/are directly mapped by the feature-to-scenario map to a corresponding current game scenario. For instance, in an example video game, when a character is killed, one or more particular screen pixels in game data 144 may change to a black color. These black pixels may be a feature that maps to a particular game scenario in the feature-to-scenario map. Game data 144 may be applied to the feature-to-scenario map, and these pixels, when black, may cause the corresponding change in game scenario indicated in the map (e.g., by selecting/mapping to a corresponding UI control configuration).
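
A feature-to-scenario map of this kind can be sketched directly, without a trained model. The pixel coordinates, probe color, and scenario name below are hypothetical, chosen only to mirror the character-killed example.

```python
BLACK = (0, 0, 0)

# Hypothetical feature-to-scenario map: each entry lists the pixel
# coordinates to probe, the color that signals the feature, and the
# game scenario (and thus UI control configuration) to switch to.
FEATURE_TO_SCENARIO = [
    {"pixels": [(10, 10), (11, 10)], "color": BLACK,
     "scenario": "character_killed"},
]

def match_scenario(frame):
    """Return the first scenario whose probe pixels all match the expected
    color, or None. `frame` maps (x, y) coordinates to (r, g, b) tuples."""
    for entry in FEATURE_TO_SCENARIO:
        if all(frame.get(p) == entry["color"] for p in entry["pixels"]):
            return entry["scenario"]
    return None
```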

Once a UI control configuration is selected, control configuration determiner 118 is configured to implement the associated graphical overlay 124 as an overlay to the video game. For instance, and as shown in FIG. 1, control configuration determiner 118 selects control configuration 132, and provides an indication of selected control configuration 132 to game engine 120. In response, game engine 120 is configured to display graphical overlay 124 of control configuration 132 in display screen 112 as an overlay to video game video 116.

For example, if the tail lights of a car are identified in video data as a feature with a high confidence value (e.g., 0.8), the identification of the tail lights may indicate the presence of a car, and thus may indicate that the game player has selected a car to drive in the video game. As such, a control configuration having a graphical overlay for driving a car may be selected from control configuration library 122 based on the identified tail lights feature. The graphical overlay may include one or more graphical controls for steering the car, for throttling the car, for braking the car, etc. Upon identification of the tail lights, the graphical overlay may be displayed on display screen 112, and the game player may interact with the graphical overlay to drive the car.

In embodiments, system 100 may operate in various ways to perform its functions. For example, FIG. 2 shows a flowchart 200 for generating a trained machine learning model for video game event recognition, according to an example embodiment. In an embodiment, flowchart 200 may be performed by computing device 102. For the purposes of illustration, flowchart 200 of FIG. 2 is described with continued reference to FIG. 1. It is noted that flowchart 200 relates to the recognizing of displayed objects in game video data, but in other embodiments, sounds may be identified in game audio data, input events may be identified in user input data, and machine usage information may be identified in machine usage data in a similar fashion. Any of such data may be used as training data used to generate the machine learning model.

Flowchart 200 of FIG. 2 begins with step 202. In step 202, the video game is executed to generate training game data that includes a training video stream. For instance, with reference to FIG. 1, video game application 106 may be executed in computing device 102 to generate training game data 148. As described above, training game data 148 is the game data of the video game generated during training and is presented to the game-playing users. During training phase 126, video game application 106 also receives user input data 134 from one or more user interface devices used by users, such as a game controller, a touch screen, a keyboard, etc. User input data 134 is received by video game application 106 during execution and is used in the generation of further instances of training game data 148 presented to the user.

For example, a game player may interact with a control to fire an in-game weapon. The game player's interactions with the control are received in user input data134. Video game application106executes the video game to incorporate the weapon firing and any effects thereof, which may be output in training game data148.

In step204, training indications are received of objects displayed in game video frames of the training video stream. For instance, with reference toFIG. 1, ML application108receives training indications of objects displayed in game video frames of the training game data. For example, ML application108may receive indications of a car's tail lights, a car's license plate, and/or other indications of the rear of a car, as training indications. Such training indications are provided as features associated with the rear-end of a car to train the machine learning model to recognize the rear-end of a car, which may be displayed on the display screen for the video game when the user's character is about to enter a car for driving. In such a circumstance, the user may desire the computing device to automatically display a graphical overlay to the display screen that includes controls for driving a car.

Such training indications may be provided in any suitable manner. For example, the game developer may indicate a screen location/region for an object, and an identifier for the object, as a training indication. For instance, a data pair may be provided as a training indication. The game developer may draw a rectangle around a car's tail lights displayed on the display screen in a video frame as a location of an example of car tail lights, and may identify the rectangle as including a car's tail lights and/or rear end. As described above, a rear end of a car may be a desired feature to identify in game video as being indicative of the game player having selected a car for driving in the video game.
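The patent does not specify a format for such a data pair; as one hypothetical sketch, the developer-drawn rectangle and object identifier could be recorded as follows (all field names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingIndication:
    """A labeled screen region supplied by the game developer."""
    label: str        # identifier for the object, e.g. "car_tail_lights"
    x: int            # left edge of the rectangle, in pixels
    y: int            # top edge of the rectangle, in pixels
    width: int
    height: int
    frame_index: int  # which game video frame the rectangle was drawn on

# The developer draws a rectangle around the tail lights in frame 120:
indication = TrainingIndication("car_tail_lights", x=400, y=310,
                                width=96, height=40, frame_index=120)
```

Many such indications, collected across frames and game sessions, form the labeled examples applied to the machine learning algorithm in step206.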

In step206, the training video stream and the training indications are applied to a machine learning algorithm to generate the trained machine learning model. For instance, with reference toFIG. 1, ML application108applies the training video stream and the training indications to generate VGERML model110. Following the above example, VGERML model110may be a trained model for a car-related video game, and thus may be trained to be capable of determining that a character is about to drive a car, among other things (e.g., identifying weapons, vehicles, characters, etc.). The machine learning algorithm may receive many different training indications associated with a car's rear end, which are used to learn how to recognize a car's rear end during video game play.

Note that VGERML model110may be generated in various forms. In accordance with one embodiment, ML application108may generate VGERML model110according to a suitable supervised machine-learning algorithm mentioned elsewhere herein or otherwise known. For instance, ML application108may implement a gradient boosted tree algorithm or other decision tree algorithm to generate and/or train VGERML model110in the form of a decision tree. The decision tree may be traversed with input data (video data, audio data, input events, machine usage data, etc.) to identify a feature. Alternatively, application108may implement an artificial neural network learning algorithm to generate VGERML model110as a neural network that is an interconnected group of artificial neurons. The neural network may be presented with input data to identify a feature.
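As a toy illustration of the decision-tree branch, the sketch below learns a single-split decision stump over one assumed scalar feature (mean red intensity of a candidate screen region, since tail lights are bright red); a real implementation would use a gradient boosted tree library over many features:

```python
def train_stump(samples):
    """Learn a one-split decision stump: find the threshold on a scalar
    feature that best separates positive (tail lights) examples from
    negative ones. `samples` is a list of (feature_value, is_tail_lights)."""
    best = (None, -1)  # (threshold, number of correctly classified samples)
    for threshold in sorted(v for v, _ in samples):
        correct = sum((v >= threshold) == label for v, label in samples)
        if correct > best[1]:
            best = (threshold, correct)
    return best[0]

# Mean red intensity of the candidate region (illustrative values):
training = [(0.9, True), (0.85, True), (0.2, False), (0.3, False)]
threshold = train_stump(training)

def predict(feature_value):
    """Traverse the (one-node) tree with input data to identify the feature."""
    return feature_value >= threshold
```

A gradient boosted tree would combine many such weak splits, and the neural-network alternative mentioned above would instead learn weights over interconnected artificial neurons, but the train-then-traverse flow is the same.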

As noted above, VGERML model110is included in video game streaming client114to be used to select and implement graphical overlays based on a current game scenario. Any number of such configured video game streaming clients114may be implemented in corresponding user devices to enable embodiments therein. Video game streaming client114may operate in various ways to perform this function. For example,FIG. 3shows a flowchart300for selecting and implementing a graphical overlay on a display screen based on a current game state, according to an example embodiment. For the purposes of illustration, flowchart300is described with continued reference toFIG. 1and with reference toFIG. 4.FIG. 4shows relevant portions of computing device104ofFIG. 1for selecting and implementing a graphical overlay based on a current video game state, according to an example embodiment.

Video game streaming client114and display screen112ofFIG. 4operate in a substantially similar manner as described above with respect toFIG. 1. Flowchart300may be performed by video game streaming client114. As shown inFIG. 4, video game streaming client114includes control configuration determiner118, game engine120, and storage404. Game engine120includes an optional game video modifier410. Control configuration determiner118includes a feature identifier406, which includes VGERML model110, and a control configuration selector408. Storage404includes control configuration library122ofFIG. 1. Other structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart300.

Flowchart300begins with step302. In step302, game data generated by a video game is received, the game data including game video in the form of a video stream containing game video frames. For instance, and with reference toFIG. 1, game engine120receives game data144including video game video116generated by a video game. As shown inFIG. 1, game data144may be streamed to video game streaming client114from source video game application140by streaming service138.

In step304, the game video is displayed on the display screen to represent the video game to a user playing the video game at the computing device. For instance, with reference toFIG. 1, video game video116is extracted from game data144and provided to display screen112by game engine120. Display screen112displays video game video116to the user playing the video game at computing device104.

In step306, at least one feature of the video game is identified at least in the game data. For instance, with reference toFIG. 4, feature identifier406may be configured to identify one or more features of the video game in the game data. Feature identifier406identifies features of the video game by use of VGERML model110. As indicated above, game data144may include a stream of game video frames and game audio data. Further game-related data may include user input events150and hardware usage data152. In embodiments, VGERML model110may receive any one or more of video data and/or audio data of game data144, user input events150, and/or hardware usage data152, to identify a feature of the video game. As described above, VGERML model110may generate an indication of a feature and an associated confidence value indicating a likelihood that the indicated feature is present. The identified feature may be any video game feature, including a visual object or a sound. Furthermore, multiple features may be identified simultaneously, or a sequence of features may be identified.

For example, VGERML model110may generate feature130to indicate a first feature of music changing to a specific track, with a confidence value of 0.9, and a second feature of the screen darkening, with a confidence value of 0.95.
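A minimal way to represent such model output, assuming nothing beyond a name and a confidence value per identified feature:

```python
from typing import NamedTuple

class IdentifiedFeature(NamedTuple):
    """Model output: a feature name plus the likelihood it is present."""
    name: str
    confidence: float

# Two features identified simultaneously, as in the example above:
features = [
    IdentifiedFeature("music_changed_to_track", 0.9),
    IdentifiedFeature("screen_darkening", 0.95),
]
```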

Alternatively, and as described above, feature identifier406ofFIG. 4may be configured to identify one or more features of the video game in the game data directly, such as specific pixel arrangements (e.g., specific pixels having specific attributes such as color) and/or sounds, included in a feature-to-scenario map. For example, a specific color of one or more particular pixels and/or a specific sound may be mapped by the feature-to-scenario map to a particular game scenario, resulting in a change of game scenarios (e.g., from a game character walking to the character riding in a car). In such a case, instead of using VGERML model110, the feature-to-scenario map may be used by control configuration determiner118. The feature-to-scenario map maps specific game data items (e.g., attributes for particular pixels, sounds, etc.) to corresponding UI control configurations for associated game scenarios. The feature-to-scenario map may be manually constructed (e.g., by a game developer), or may be configured in other ways in embodiments.
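A feature-to-scenario map of this kind could be as simple as a hand-built lookup table; the keys and scenario names below are hypothetical illustrations:

```python
# Manually constructed feature-to-scenario map: specific game data items
# (an attribute of a particular pixel, a specific sound) map directly to
# UI control configurations for associated game scenarios.
FEATURE_TO_SCENARIO = {
    ("pixel", (640, 360), "red"): "car_driving_controls",
    ("sound", "engine_start"):    "car_driving_controls",
    ("sound", "footsteps"):       "walking_controls",
}

def lookup_scenario(feature_key):
    """Direct lookup: no ML model, just the developer-built map."""
    return FEATURE_TO_SCENARIO.get(feature_key)
```

A map lookup trades the generality of a trained model for predictability: the developer controls exactly which pixel attributes and sounds trigger an overlay change.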

In step308, a user interface (UI) control configuration associated with the identified at least one feature is selected from a plurality of UI control configurations for the video game, each of the UI control configurations defining a corresponding graphical overlay to the video game configured to be interacted with in a corresponding live game scenario of the video game. For instance, with reference toFIG. 4, control configuration selector408may select UI control configuration132from control configuration library122based on feature130. For instance, UI control configuration132may be associated with feature130in library122, such that when feature130is identified (with acceptably high confidence), UI control configuration132is selected. In an embodiment, UI control configuration132defines/includes graphical overlay124. Graphical overlay124is configured for game play in a particular scenario of the video game.

Note that in some embodiments, a single identified feature may be used by control configuration selector408to select a UI control configuration. For example, if tail lights are an identified feature, a UI control configuration for control of a car may be selected. In other embodiments, multiple identified features may be used by control configuration selector408to select a UI control configuration. For instance, multiple car-related features (e.g., tail lights, license plate, rear windshield) may need to be identified in order to select a UI control configuration for control of a car.
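The single-feature and multiple-feature cases can be sketched together; the 0.7 threshold and the required-feature sets below are assumptions for illustration, since the text only requires "acceptably high" confidence:

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for "acceptably high"

# A configuration may require one feature, or several related features:
REQUIRED_FEATURES = {
    "car_ui_config":     {"tail_lights", "license_plate", "rear_windshield"},
    "walking_ui_config": {"footsteps"},
}

def select_configuration(identified):
    """`identified` maps feature name -> confidence value. A configuration
    is selected only when every feature it requires was identified with a
    confidence at or above the threshold."""
    confident = {name for name, c in identified.items()
                 if c >= CONFIDENCE_THRESHOLD}
    for config, required in REQUIRED_FEATURES.items():
        if required <= confident:  # all required features present
            return config
    return None
```

Requiring several car-related features before switching overlays reduces the chance that a single spurious detection flips the controls mid-game.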

In step310, the graphical overlay corresponding to the selected UI control configuration is implemented on the video game in the display screen. For instance, with reference toFIG. 4, game engine120may access graphical overlay124(via control configuration selector408) associated with selected UI control configuration132. Game engine120displays graphical overlay124as an overlay to video game video116in display screen112.

Accordingly, upon graphical overlay124being displayed in display screen112, the game player may interact with the displayed controls of graphical overlay124. Graphical overlay124corresponds with the current game scenario, which corresponds to the presence of one or more features identified in game-related data as described above.

Note that in an embodiment, game video modifier410may be present to modify the stream of video data received from the video game and displayed as video game video116. For example, in an embodiment, game video modifier410is configured to render an image to be displayed as a frame (or multiple frames) of video game video116. Game video modifier410may composite the image into a received video frame to create a modified video frame in video game video116. Game video modifier410may alternatively generate a completely new video frame that is inserted into the video stream for display as video game video116, and/or to replace a video frame therein.
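Compositing a rendered image into a received video frame might look like the following sketch, which treats a frame as a 2-D array of pixel values and `None` as a transparent pixel (both of which are assumptions made for illustration):

```python
def composite(frame, image, x, y):
    """Composite `image` (a 2-D list of pixel values, None = transparent)
    into `frame` at offset (x, y), returning a modified copy of the frame."""
    out = [row[:] for row in frame]  # copy; the received frame is untouched
    for dy, image_row in enumerate(image):
        for dx, pixel in enumerate(image_row):
            if pixel is not None:
                out[y + dy][x + dx] = pixel
    return out

frame = [[0] * 4 for _ in range(3)]  # tiny 4x3 received video frame
icon = [[7, None],                   # rendered image with one
        [7, 7]]                      # transparent corner
modified = composite(frame, icon, x=1, y=1)
```

A real game engine would blend RGBA pixels on the GPU rather than copying lists, but the per-pixel overwrite-where-opaque logic is the same.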

As described above, an object displayed during video game play may be indicative of a change in game scenario, and thus may be a trigger to change game overlays. Accordingly,FIG. 6shows a flowchart600for identifying an object in a video game, according to an example embodiment. Flowchart600may be performed during step306of flowchart300(FIG. 3). For purposes of illustration, flowchart600is described with continued reference toFIG. 4and with reference toFIG. 5. It is noted that flowchart600is described with respect to the recognizing of displayed objects in game video data, but in other embodiments, sounds may be identified in game audio data as features. Furthermore, features may be identified using input events in user input data, and machine usage data. Embodiments are directed to any of such data, in any combination, used to identify features of a video game.

FIG. 5shows a block diagram of relevant portions of computing device104ofFIG. 1for identifying features in a video game that are used to select a graphical overlay, according to an example embodiment. Flowchart600may be performed by feature identifier406. Other structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart600. Flowchart600is described as follows.

Flowchart600begins with step602. In step602, a predetermined screen region of the display screen is analyzed for an image of an object. For instance, with reference toFIG. 5, VGERML model110of feature identifier406may analyze video game video116(of game data144inFIG. 1) in all display regions, including predetermined screen region518(shown on display screen112) for a displayed object. VGERML model110may be configured to focus on predetermined screen region518due to it being a location where an object of interest is known to appear during game play in correlation with a change of game scenario. For instance, predetermined screen region518may have been indicated as a location of the object during training of VGERML model110(e.g., flowchart200ofFIG. 2).

For example, as shown inFIG. 5, predetermined screen region518may be the predetermined location for car tail lights to appear when a user controls their in-game character to select and enter a car to drive. Tail lights identified in predetermined screen region518correlate highly with the game entering a car driving scenario. As such, VGERML model110may identify tail lights of a car in region518.
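Restricting analysis to the predetermined screen region can be sketched as a simple crop, so that only that area is handed to the model; the frame contents and region coordinates below are illustrative:

```python
def crop_region(frame, region):
    """Extract the predetermined screen region from a video frame.
    `frame` is a 2-D list of pixels; `region` is (x, y, width, height)."""
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]

# Hypothetical 6x4 frame; the bright (9) pixels are the tail lights,
# appearing at the location indicated during training.
frame = [[0, 0, 0, 0, 0, 0],
         [0, 0, 0, 9, 9, 0],
         [0, 0, 0, 9, 9, 0],
         [0, 0, 0, 0, 0, 0]]
tail_light_region = (3, 1, 2, 2)  # x, y, width, height
patch = crop_region(frame, tail_light_region)
```

Cropping before inference narrows the model's input to the area where the object is known to appear, which both cheapens the analysis and reduces false positives elsewhere on screen.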

In step604, a confidence score associated with the object is determined indicating a probability of the image of the object being contained in the predetermined screen region. For instance, with reference toFIG. 5, VGERML model110determines a confidence score associated with the object indicating a probability of the image of the object being contained in the predetermined screen region.

For example, as shown inFIG. 5, feature identifier406outputs a feature508generated by VGERML model110. Feature508may indicate that a car's tail lights were identified, along with the associated generated confidence score. Control configuration selector408receives feature508, and if the confidence score is sufficiently high (e.g., greater than a predetermined threshold), control configuration selector408selects the UI control configuration from library122that correlates to the identified feature. In the example ofFIG. 5, control configuration selector408selects a car UI control configuration502, which correlates with the identified car tail lights.

Control configuration selector408provides a car UI overlay510associated with car UI control configuration502to game engine120. Game engine120is configured to display car UI overlay510over video game video116in display screen112. As such, the user can interact with controls of car UI overlay510to have their in-game character drive a car. Car UI overlay510is customized for car driving in the video game, rather than being a generic UI overlay, thereby improving the user's game experience.

As noted above, to accommodate certain video games, embodiments enable the tuning of the input response provided by controls of graphical overlays. Game developers may create dead zones and custom response curves for UI controls during the process of targeting the UI controls for a specific targeted input device. For instance, the target input device may be a keyboard, a mouse, or a game controller. Accordingly, when game streaming is implemented, the input response of a control may be incorrect because the new input control at the client computing device (e.g., a touch control) is not the originally intended input control, which may be a stick on a game controller. Thus, as described hereinafter, the system provides for the tuning of the input response of UI controls in the client devices.

For example, a two-dimensional control such as a thumbstick has a two-dimensional tuning curve, and may have a geometry such as radial, elliptical, or square. If the video game is tuned for a game controller, the horizontal axis of a thumbstick may pivot from −1 to +1 to steer the car left or right. The game developer may program a region around the thumbstick's middle pivot position (e.g., −0.25 to +0.25) to be a dead zone, where no steering occurs when the thumbstick is positioned there, which may prevent the steering from feeling too twitchy to the user. Furthermore, at the ends of the response curve, a more extreme response may be programmed to enable hard turns when the user makes extreme movements with the thumbstick. However, when used on a different device, such as a touch screen, the user will have a poor experience because a touch control will not necessarily perform well with the thumbstick response curve.

As such, in an embodiment, control tuning may be used to flatten the game tuning, with a uniform response for the control that is 1 to 1 with the response within the game. After the flattening, the control may be tuned for the client control, such as a touch screen. Such tuning may include, for example, inserting one or more dead zones having corresponding ranges, clipping regions of curves, changing the response curvature, etc.
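The controller-oriented tuning described above, and its flattened 1:1 counterpart, might be modeled as follows (the squared response and the 0.25 dead zone are illustrative choices matching the example ranges in the text, not values from the patent):

```python
def controller_response(x, dead_zone=0.25):
    """Hypothetical thumbstick response curve: a dead zone around the
    middle pivot region, and an increasingly extreme response toward
    the ends of stick travel. `x` is stick position in [-1, +1]."""
    if abs(x) <= dead_zone:
        return 0.0
    # Rescale the live range to [0, 1], then square it so movement near
    # full deflection produces a harder turn than movement near center.
    live = (abs(x) - dead_zone) / (1.0 - dead_zone)
    return (1 if x > 0 else -1) * live ** 2

def flattened_response(x):
    """The uniform 1:1 response after the game tuning is flattened."""
    return x
```

Flattening removes the dead zone and curvature so the client can start from a neutral mapping before re-tuning for its own control, such as a touch screen.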

Accordingly, in embodiments, control configuration determiner118may be configured to enable control tuning. For instance, control configuration determiner118may operate according toFIG. 8.FIG. 8shows a flowchart800for tuning control associated with a touch screen graphical overlay, according to an example embodiment. For the purposes of illustration, flowchart800is described with continued reference toFIG. 4andFIG. 5and with reference toFIG. 7.FIG. 7shows a block diagram of relevant portions of computing device104ofFIG. 1for tuning a user interface control of a video game graphical overlay, according to an example embodiment.

Flowchart800may be performed by game input response tuner720. Other structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the following discussion regarding flowchart800. Flowchart800is described as follows.

Flowchart800begins with step802. In step802, a game input response curve associated with a physical game controller input device is linearized. For instance, with reference to the example ofFIG. 7, game input response tuner720linearizes a control input response curve702associated with a control of a car UI control configuration706. For instance, control input response curve702may have originally been associated with a stick or button of a game controller, and thus may not be linear as described above. Game input response tuner720is configured to flatten control input response curve702to have an input response of 1:1 (not curved).

In step804, the linearized game input response curve is tuned for a touch input of the graphical overlay. For instance, with reference toFIG. 7, game input response tuner720enables a game developer or other user to input tuning information for control input response curve702of the graphical overlay. In embodiments, the tuning information may be in the form of touch input adjustments at display screen112, which displays car graphical overlay710associated with car UI control configuration706. The game developer may be enabled to adjust the tuning of control input response curve702in any manner, including by interacting with one or more tuning controls displayed at display screen112by game input response tuner720. Game input response tuner720enables tuning of control input response curve702from the flattened version (of step802) to have any desired curvature, as well as one or more dead regions if desired.
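Starting from the flattened 1:1 response of step802, re-tuning for a touch control might insert a small dead zone and a gentler curvature; the specific width and exponent below are illustrative values, not taken from the patent:

```python
def tune_for_touch(x, dead_zone=0.05, exponent=1.5):
    """Re-tune the flattened 1:1 response for a touch control: a narrow
    dead zone (touch has no physical centering spring, so only a small
    one is needed) and a gentler curve than the controller's squared
    response. `x` is the normalized touch position in [-1, +1]."""
    if abs(x) <= dead_zone:
        return 0.0
    live = (abs(x) - dead_zone) / (1.0 - dead_zone)
    return (1 if x > 0 else -1) * live ** exponent
```

The resulting tuned curve is then associated with the touch input of the graphical overlay, per step806.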

In step806, the tuned game input response curve is associated with the touch input of the graphical overlay. For instance, as shown inFIG. 7, game input response tuner720associates control input response curve702with the associated control of car graphical overlay710defined in car UI control configuration706.

III. Example Computer System Implementation

Computing device102, computing device104, video game application106, ML application108, VGERML model110, display screen112, video game streaming client114, video game video116, control configuration determiner118, game engine120, control configuration library122, game input response tuner720, storage404, feature identifier406, control configuration selector408, game video modifier410, flowchart200, flowchart300, flowchart600, and flowchart800, may be implemented in hardware, or hardware combined with one or both of software and/or firmware. For example, computing device102, computing device104, video game application106, ML application108, VGERML model110, display screen112, video game streaming client114, video game video116, control configuration determiner118, game engine120, control configuration library122, game input response tuner720, storage404, feature identifier406, control configuration selector408, game video modifier410, flowchart200, flowchart300, flowchart600, and flowchart800, may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer readable storage medium. Alternatively, computing device102, computing device104, video game application106, ML application108, VGERML model110, display screen112, video game streaming client114, video game video116, control configuration determiner118, game engine120, control configuration library122, game input response tuner720, storage404, feature identifier406, control configuration selector408, game video modifier410, flowchart200, flowchart300, flowchart600, and flowchart800, may be implemented as hardware logic/electrical circuitry.

For instance, in an embodiment, one or more, in any combination, of computing device102, computing device104, video game application106, ML application108, VGERML model110, display screen112, video game streaming client114, video game video116, control configuration determiner118, game engine120, control configuration library122, game input response tuner720, storage404, feature identifier406, control configuration selector408, game video modifier410, flowchart200, flowchart300, flowchart600, and flowchart800, may be implemented together in a SoC. The SoC may include an integrated circuit chip that includes one or more of a processor (e.g., a central processing unit (CPU), microcontroller, microprocessor, digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or further circuits, and may optionally execute received program code and/or include embedded firmware to perform functions.

FIG. 9depicts an exemplary implementation of a computing device900in which embodiments may be implemented. For example, computing device102and computing device104may each be implemented in one or more computing devices similar to computing device900in stationary or mobile computer embodiments, including one or more features of computing device900and/or alternative features. The description of computing device900provided herein is provided for purposes of illustration, and is not intended to be limiting. Embodiments may be implemented in further types of computer systems, as would be known to persons skilled in the relevant art(s).

As shown inFIG. 9, computing device900includes one or more processors, referred to as processor circuit902, a system memory904, and a bus906that couples various system components including system memory904to processor circuit902. Processor circuit902is an electrical and/or optical circuit implemented in one or more physical hardware electrical circuit device elements and/or integrated circuit devices (semiconductor material chips or dies) as a central processing unit (CPU), a microcontroller, a microprocessor, and/or other physical hardware processor circuit. Processor circuit902may execute program code stored in a computer readable medium, such as program code of operating system930, application programs932, other programs934, etc. Bus906represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. System memory904includes read only memory (ROM)908and random-access memory (RAM)910. A basic input/output system912(BIOS) is stored in ROM908.

Computing device900also has one or more of the following drives: a hard disk drive914for reading from and writing to a hard disk, a magnetic disk drive916for reading from or writing to a removable magnetic disk918, and an optical disk drive920for reading from or writing to a removable optical disk922such as a CD ROM, DVD ROM, or other optical media. Hard disk drive914, magnetic disk drive916, and optical disk drive920are connected to bus906by a hard disk drive interface924, a magnetic disk drive interface926, and an optical drive interface928, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of hardware-based computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, RAMs, ROMs, and other hardware storage media.

A number of program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include operating system930, one or more application programs932, other programs934, and program data936. Application programs932or other programs934may include, for example, computer program logic (e.g., computer program code or instructions) for implementing computing device102, computing device104, video game application106, ML application108, VGERML model110, display screen112, video game streaming client114, video game video116, control configuration determiner118, game engine120, control configuration library122, game input response tuner720, storage404, feature identifier406, control configuration selector408, game video modifier410, flowchart200, flowchart300, flowchart600, and flowchart800, and/or further embodiments described herein.

A user may enter commands and information into computing device900through input devices such as keyboard938and pointing device940. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, a touch screen and/or touch pad, a voice recognition system to receive voice input, a gesture recognition system to receive gesture input, or the like. These and other input devices are often connected to processor circuit902through a serial port interface942that is coupled to bus906, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB).

A display screen944is also connected to bus906via an interface, such as a video adapter946. Display screen944may be external to, or incorporated in computing device900. Display screen944may display information, as well as being a user interface for receiving user commands and/or other information (e.g., by touch, finger gestures, virtual keyboard, etc.). In addition to display screen944, computing device900may include other peripheral output devices (not shown) such as speakers and printers.

Computing device900is connected to a network948(e.g., the Internet) through an adaptor or network interface950, a modem952, or other means for establishing communications over the network. Modem952, which may be internal or external, may be connected to bus906via serial port interface942, as shown inFIG. 9, or may be connected to bus906using another interface type, including a parallel interface.

As used herein, the terms “computer program medium,” “computer-readable medium,” and “computer-readable storage medium” are used to refer to physical hardware media such as the hard disk associated with hard disk drive914, removable magnetic disk918, removable optical disk922, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media, as well as wired media. Embodiments are also directed to such communication media that are separate and non-overlapping with embodiments directed to computer-readable storage media.

As noted above, computer programs and modules (including application programs932and other programs934) may be stored on the hard disk, magnetic disk, optical disk, ROM, RAM, or other hardware storage medium. Such computer programs may also be received via network interface950, serial port interface942, or any other interface type. Such computer programs, when executed or loaded by an application, enable computing device900to implement features of embodiments discussed herein. Accordingly, such computer programs represent controllers of the computing device900.

Embodiments are also directed to computer program products comprising computer code or instructions stored on any computer-readable medium. Such computer program products include hard disk drives, optical disk drives, memory device packages, portable memory sticks, memory cards, and other types of physical storage hardware.

IV. Additional Example Embodiments

A method in a computing device having a display screen is described herein. The method comprises receiving game data generated by a video game, the game data including game video in the form of a video stream containing game video frames; displaying, on the display screen, the game video to represent the video game to a user playing the video game at the computing device; identifying at least one feature of the video game at least in the game data; selecting a user interface (UI) control configuration associated with the identified at least one feature from a plurality of UI control configurations for the video game, each of the UI control configurations defining a corresponding graphical overlay to the video game configured to be interacted with in a corresponding live game scenario of the video game; and implementing, on the video game in the display screen, the graphical overlay corresponding to the selected UI control configuration.

In one embodiment of the foregoing method, said identifying comprises identifying a feature of the video game in at least one of a game video frame of the game data, game audio data of the game data, a stream of input events provided to the video game, or usage of hardware of the computing device.

In another embodiment of the foregoing method, at least one feature includes an object, and said identifying further comprises: analyzing a predetermined screen region of the display screen for an image of the object; and determining a confidence score associated with the object indicating a probability of the image of the object being contained in the predetermined screen region.

In yet another embodiment of the foregoing method, said analyzing comprises: applying a portion of a game video frame of the video stream containing the image of the object to a trained machine learning model to generate the confidence score.

In yet another embodiment of the foregoing method, the method further comprises executing the video game to generate training game data that includes a training video stream; receiving training indications of objects displayed in game video frames of the training game data that includes the training video stream; and applying the training game data that includes the training video stream and the training indications to a machine learning algorithm to generate the trained machine learning model.
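The confidence-scoring embodiments above can be sketched as cropping a predetermined region from a frame and applying a model to the crop. The region coordinates and the stand-in scoring function below are assumptions for illustration; a real system would apply a machine learning model trained on labeled game video frames as described:

```python
# Sketch of scoring a predetermined screen region for an object image.
# `stand_in_model` is a placeholder for a trained machine learning model.

def crop_region(frame, region):
    """Crop a rectangular region (x, y, width, height) from a frame
    represented as a 2D list of grayscale pixel values (0-255)."""
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]

def stand_in_model(patch):
    """Placeholder model: returns a pseudo-confidence in [0, 1] based on
    mean brightness of the patch. A trained model (e.g., a CNN) would
    return a learned probability instead."""
    pixels = [p for row in patch for p in row]
    return sum(pixels) / (255.0 * len(pixels))

def object_confidence(frame, region, model=stand_in_model):
    """Apply the model to the cropped region and return the confidence
    score that the object's image is contained in that region."""
    return model(crop_region(frame, region))
```

The returned score can then be compared against a threshold to decide whether the object, and hence its associated game scenario, is present.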

In yet another embodiment of the foregoing method, the display screen is a touch screen, and the method further comprises: tuning control associated with the selected UI control configuration corresponding to the graphical overlay for the touch screen, said tuning comprising: linearizing a game input response curve associated with a physical game controller input device, tuning the linearized game input response curve for a touch input of the graphical overlay, and associating the tuned game input response curve with the touch input of the graphical overlay.
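The tuning embodiment above can be sketched numerically: the controller's response curve is linearized by numerically inverting it over sampled points, and a touch-specific curve is composed on top so the effective touch response matches the tuned curve. The cubic controller curve and the milder touch curve are illustrative assumptions only:

```python
import bisect

def controller_curve(x):
    """Illustrative physical-thumbstick response curve: an assumed cubic
    so small deflections give fine control. Input and output in [0, 1]."""
    return x ** 3

def linearize(curve, samples=1024):
    """Build a numerical inverse of a monotonically increasing `curve`
    so that inverse(curve(x)) ~= x, linearizing the response."""
    xs = [i / samples for i in range(samples + 1)]
    ys = [curve(x) for x in xs]

    def inverse(y):
        i = bisect.bisect_left(ys, y)
        if i == 0:
            return xs[0]
        if i > samples:
            return xs[-1]
        # linear interpolation between the bracketing samples
        y0, y1 = ys[i - 1], ys[i]
        t = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)
        return xs[i - 1] + t * (xs[i] - xs[i - 1])

    return inverse

def touch_curve(x):
    """Illustrative touch-specific tuning: a milder response suited to a
    flat touch surface that lacks physical spring resistance."""
    return x ** 1.5

def tuned_touch_input(raw_touch):
    """Pre-warp a raw touch input so that after the game applies its
    controller curve, the overall response equals the touch curve."""
    inverse = linearize(controller_curve)
    return inverse(touch_curve(raw_touch))
```

Associating `tuned_touch_input` with the overlay's touch input makes the game, which still applies its controller-oriented curve internally, behave as if tuned for touch.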

In yet another embodiment of the foregoing method, the game data includes a game video frame that includes an image rendered by the computing device and composited into the game video frame.
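A minimal sketch of the compositing embodiment, alpha-blending a locally rendered overlay image into a game video frame. The grayscale frame representation and fixed blend factor are assumptions for illustration:

```python
def composite(frame, image, x, y, alpha=0.5):
    """Alpha-blend `image` (2D list of grayscale values) into a copy of
    `frame` at offset (x, y); pixels falling outside the frame are
    clipped. Returns the composited frame."""
    out = [row[:] for row in frame]
    for j, img_row in enumerate(image):
        for i, pixel in enumerate(img_row):
            fy, fx = y + j, x + i
            if 0 <= fy < len(out) and 0 <= fx < len(out[fy]):
                out[fy][fx] = round((1 - alpha) * out[fy][fx] + alpha * pixel)
    return out
```

In practice the client would render the overlay image and composite it into each incoming game video frame before display.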

A system in a computing device having a display screen is described herein. The system includes: a video game streaming client comprising: a game engine configured to receive game data generated by a video game, the game data including game video in the form of a video stream containing game video frames; and display the game video on the display screen to represent the video game to a user playing the video game at the computing device; and a control configuration determiner comprising: a feature identifier configured to: identify at least one feature of the video game at least in the game data; and a control configuration selector configured to: select a user interface (UI) control configuration associated with the identified at least one feature from a plurality of UI control configurations for the video game, each of the UI control configurations defining a corresponding graphical overlay to the video game configured to be interacted with in a corresponding live game scenario of the video game, and implement the graphical overlay corresponding to the selected UI control configuration on the video game in the display screen.

In one embodiment of the foregoing system, the feature identifier is further configured to: identify a feature of the video game in at least one of: a game video frame of the game data, game audio data of the game data, a stream of input events provided to the video game, or usage of hardware of the computing device.

In another embodiment of the foregoing system, the at least one feature includes an object, and the feature identifier is further configured to: analyze a predetermined screen region of the display screen for an image of the object; and determine a confidence score associated with the object indicating a probability of the image of the object being contained in the predetermined screen region.

In yet another embodiment of the foregoing system, the feature identifier is configured to: apply a portion of a game video frame of the video stream containing the image of the object to a trained machine learning model to generate the confidence score.

In yet another embodiment of the foregoing system, the display screen is a touch screen, and the video game streaming client further comprises: a game input response tuner configured to tune control associated with the selected UI control configuration corresponding to the graphical overlay for the touch screen, the game input response tuner configured to: linearize a game input response curve associated with a physical game controller input device, tune the linearized game input response curve for a touch input of the graphical overlay, and associate the tuned game input response curve with the touch input of the graphical overlay.

In yet another embodiment of the foregoing system, the game data includes a game video frame, and the game engine includes a game video modifier configured to: render an image; and composite the image into the game video frame.

A computer-readable medium having computer program logic recorded thereon that, when executed by at least one processor, causes the at least one processor to perform a method, the method comprising: receiving game data generated by a video game, the game data including game video in the form of a video stream containing game video frames; displaying, on a display screen of a computing device, the game video to represent the video game to a user playing the video game at the computing device; identifying at least one feature of the video game at least in the game data; selecting a user interface (UI) control configuration associated with the identified at least one feature from a plurality of UI control configurations for the video game, each of the UI control configurations defining a corresponding graphical overlay to the video game configured to be interacted with in a corresponding live game scenario of the video game; and implementing, on the video game in the display screen, the graphical overlay corresponding to the selected UI control configuration.

In one embodiment of the foregoing computer-readable medium, said identifying comprises: identifying a feature of the video game in at least one of: a game video frame of the game data, game audio data of the game data, a stream of input events provided to the video game, or usage of hardware of the computing device.

In another embodiment of the foregoing computer-readable medium, the at least one feature includes an object, and said identifying further comprises: analyzing a predetermined screen region of the display screen for an image of the object; and receiving a confidence score associated with the object indicating a probability of the image of the object being contained in the predetermined screen region.

In another embodiment of the foregoing computer-readable medium, said analyzing comprises: applying a portion of a game video frame of the video stream containing the image of the object to a trained machine learning model to generate the confidence score.

In another embodiment of the foregoing computer-readable medium, the method further comprises: executing the video game to generate a training video stream; receiving training indications of objects displayed in game video frames of the training video stream; and applying the training video stream and the training indications to a machine learning algorithm to generate the trained machine learning model.

In another embodiment of the foregoing computer-readable medium, the display screen is a touch screen, and the method further comprises: tuning control associated with the selected UI control configuration corresponding to the graphical overlay for the touch screen, said tuning comprising: linearizing a game input response curve associated with a physical game controller input device, tuning the linearized game input response curve for a touch input of the graphical overlay, and associating the tuned game input response curve with the touch input of the graphical overlay.

In another embodiment of the foregoing computer-readable medium, the game data includes a game video frame that includes an image rendered by the computing device and composited into the game video frame.

V. Conclusion

While various embodiments of the present application have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the application as defined in the appended claims. Accordingly, the breadth and scope of the present application should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

  1. A method in a computing device having a display screen, the method comprising: identifying at least one feature of a video game in game data relating to the video game, the at least one feature reflecting at least one of audio output by the video game, video output by the video game, or input events provided when the video game is being played; selecting a user interface control configuration associated with the at least one feature; and implementing, on the display screen, a graphical overlay corresponding to the selected user interface control configuration, the graphical overlay configured to be interacted with in a corresponding live game scenario of the video game.
  2. The method of claim 1, wherein the method further comprises: receiving the audio or video output by the video game from a server executing the video game and streaming the audio or video to the computing device.
  3. The method of claim 2, wherein the game data includes audio data corresponding to the audio output by the video game; wherein said identifying comprises identifying, based on the audio data, an audio feature in the audio output by the video game; and wherein said selecting comprises selecting the selected user interface control configuration from a plurality of user interface control configurations for the video game based at least on the audio feature.
  4. The method of claim 2, further comprising: displaying the video output by the video game on the display screen; wherein said identifying comprises identifying an object in the video displayed on the display screen; and wherein said selecting comprises selecting the selected user interface control configuration from a plurality of user interface control configurations for the video game based at least on the object identified in the video.
  5. The method of claim 1, wherein said identifying comprises: identifying a particular feature used to select the selected user interface control configuration in a stream of input events provided to the video game when the video game is being played.
  6. The method of claim 1, wherein the at least one feature includes an object, and said identifying further comprising: analyzing a predetermined screen region of the display screen for an image of the object; and determining a confidence score associated with the object indicating a probability of the image of the object being contained in the predetermined screen region.
  7. The method of claim 1, wherein the display screen is a touch screen, the method further comprising: tuning control associated with the selected user interface control configuration corresponding to the graphical overlay for the touch screen, said tuning comprising: linearizing a game input response curve associated with a physical game controller input device, tuning the linearized game input response curve for a touch input of the graphical overlay, and associating the tuned game input response curve with the touch input of the graphical overlay.
  8. A system implemented in a computing device having a display screen, the system comprising: a feature identifier configured to identify at least one feature of a video game in game data relating to the video game, the at least one feature reflecting at least one of audio output by the video game, video output by the video game, or input events provided when the video game is being played; and a control configuration selector configured to: select a user interface control configuration associated with the at least one feature, and implement a graphical overlay corresponding to the selected user interface control configuration, the graphical overlay configured to be interacted with by a user in a corresponding live game scenario of the video game.
  9. The system of claim 8, wherein the feature identifier is configured to receive the audio or video output by the video game from a server executing the video game and streaming the audio or video to the computing device.
  10. The system of claim 9, wherein: the feature identifier is configured to identify an audio feature in the audio output by the video game when the video game is being played; and the control configuration selector is configured to select the selected user interface control configuration from a plurality of user interface control configurations for the video game based at least on the audio feature.
  11. The system of claim 9, wherein: the feature identifier is configured to identify an object in the video output by the video game when the video game is being played; and the control configuration selector is configured to select the selected user interface control configuration from a plurality of user interface control configurations for the video game based at least on the object identified in the video.
  12. The system of claim 8, wherein the feature identifier is configured to identify a particular feature used to select the selected user interface control configuration in a stream of input events provided to the video game when the video game is being played.
  13. The system of claim 8, wherein the at least one feature includes an object, and the feature identifier is configured to: analyze a predetermined screen region of the display screen for an image of the object; and determine a confidence score associated with the object indicating a probability of the image of the object being contained in the predetermined screen region.
  14. The system of claim 8, wherein the display screen is a touch screen, and the system further comprises: a game input response tuner configured to tune control associated with the selected user interface control configuration corresponding to the graphical overlay for the touch screen, including being configured to: linearize a game input response curve associated with a physical game controller input device, tune the linearized game input response curve for a touch input of the graphical overlay, and associate the tuned game input response curve with the touch input of the graphical overlay.
  15. A hardware computer-readable medium having program code recorded thereon that, when executed by at least one processor, causes the at least one processor to perform acts comprising: identifying at least one feature of a video game in game data relating to the video game, the at least one feature reflecting at least one of audio output by the video game, video output by the video game, or input events provided when the video game is being played; selecting a user interface control configuration associated with the at least one feature; and implementing a graphical overlay corresponding to the selected user interface control configuration, the graphical overlay configured to be interacted with to control the video game in a corresponding live game scenario of the video game.
  16. The hardware computer-readable medium of claim 15, wherein the acts further comprise: receiving the audio or video output by the video game from a server executing the video game and streaming the audio or video.
  17. The hardware computer-readable medium of claim 16, wherein: said identifying comprises identifying an audio feature in the audio output by the video game; and said selecting comprises selecting the selected user interface control configuration from a plurality of user interface control configurations for the video game based at least on the audio feature.
  18. The hardware computer-readable medium of claim 16, wherein: said identifying comprises identifying an object in the video output by the video game; and said selecting comprises selecting the selected user interface control configuration from a plurality of user interface control configurations for the video game based at least on the object.
  19. The hardware computer-readable medium of claim 15, wherein said identifying comprises: identifying a particular feature used to select the selected user interface control configuration in a stream of input events provided to the video game while the video game is being played.
  20. The hardware computer-readable medium of claim 15, wherein the graphical overlay is implemented on a touch screen, the acts further comprising: tuning control associated with the selected user interface control configuration corresponding to the graphical overlay for the touch screen, said tuning comprising: linearizing a game input response curve associated with a physical game controller input device, tuning the linearized game input response curve for a touch input of the graphical overlay, and associating the tuned game input response curve with the touch input of the graphical overlay.