U.S. Pat. No. 11,420,131

SYSTEMS AND METHODS FOR FACILITATING SECRET COMMUNICATION BETWEEN PLAYERS DURING GAME PLAY

Assignee: Sony Interactive Entertainment Inc.

Issue Date: May 4, 2020

Illustrative Figure

Abstract

Systems and methods for facilitating secret communication between players are described. One of the methods includes receiving image data of a first player to identify one or more gestures and receiving input data indicating a response of a second player to the one or more gestures. The method further includes training a model using the image data and the input data to generate an inference of communication between the first user and the second user. The method also includes generating additional inferences of communication between the first user and the second user. The method includes generating a recommendation to the second user based on the inference of communication and the additional inferences of communication.

Description


DETAILED DESCRIPTION

Systems and methods for facilitating secret communication between players during game play are described. It should be noted that various embodiments of the present disclosure may be practiced without some or all of the specific details set forth herein. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure various embodiments of the present disclosure.

FIG. 1 is a diagram of a system 100 to illustrate game play between users 1, 2, and 3 during a play of a game. An example of the game includes a videogame, such as Fortnite™, Minecraft™, or World of Warcraft™. The system 100 includes a display device 104 having a camera 102. An example of the display device 104 is a television, such as a smart television. Other examples of the display device 104 include a liquid crystal display (LCD) device, a light emitting diode (LED) display device, and a plasma display device. An example of the camera 102 includes a high-speed camera or an event-based camera. An example of the high-speed camera includes a digital camera that records at a rate of at least 6000 frames per second. To illustrate, the high-speed camera records at a rate of 17,000 frames per second or 26,760 frames per second. The high-speed camera can have an internal storage of 72 gigabytes (GB), 444 GB, or 288 GB, and is expandable to have a storage of 1 to 2 terabytes (TB). As an example, the event-based camera recognizes an event, such as a gesture of the user 1, within a field-of-view of the event-based camera, and captures images of the event. To illustrate, the field-of-view has a background that is stationary for a time period and real-world objects that move in the background during the time period. Examples of the real-world objects include any of the users 1-3 and body parts of the users. Examples of a body part are provided below. The real-world objects are within the field-of-view of the event-based camera. The event-based camera can be used to capture images, and each of the images is processed to have pixels that include data regarding the movement of the real-world objects but not pixels that include data regarding the background, which is stationary. For example, pixels that display the background can be processed by a processor to be black or of a uniform color, texture, and intensity. In this manner, the images captured by the event-based camera are processed to highlight the movement of the real-world objects within the background. As an example, the background includes other real-world objects that are stationary. To illustrate, if the user 2 is stationary, the images are processed to have pixels that do not include data regarding the user 2.
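The background-suppression step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: pixels whose intensity does not change between two frames are treated as stationary background and set to black, leaving only the moving real-world objects. The function name and the change threshold are assumptions.

```python
import numpy as np

def suppress_background(prev_frame: np.ndarray, curr_frame: np.ndarray,
                        threshold: int = 10) -> np.ndarray:
    """Return curr_frame with stationary (background) pixels set to black (0)."""
    # A pixel is "moving" if its intensity changed by more than the threshold.
    moved = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > threshold
    out = np.zeros_like(curr_frame)   # all-black background
    out[moved] = curr_frame[moved]    # keep only pixels that moved
    return out
```

An event-based camera effectively performs this suppression in hardware; the sketch shows the equivalent post-processing on ordinary frames.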

The users 1-3 are players playing the game. During the play of the game, the users 1 and 3 are grouped in a team A and the user 2 is a member of a team B. The team A is playing against the team B. The users 1-3 play the game using game controllers 1, 2, and 3. For example, the user 1 plays the game using a hand-held game controller, the user 2 plays the game using a hand-held game controller, and the user 3 plays the game using a hand-held game controller. An example of a hand-held game controller, as described herein, includes a DualShock™ game controller available from Sony Corporation. To illustrate, the hand-held game controller includes multiple buttons to control actions of a virtual object in the game, or to achieve a transition from one virtual scene to another virtual scene in the game, or to log out of the game, or to log into the game. As another illustration, the hand-held game controller includes multiple joysticks to control movement of a virtual object in the game. The joysticks are operated by a user, described herein, to control a game character to jump up and down, to run, to walk, to swim, to fly, or to hop on a virtual vehicle.

In one embodiment, the game is played by a greater or a lower number of users than that illustrated using FIG. 1.

In an embodiment, instead of the display device 104, each of the users 1-3 uses a separate display device, such as a tablet or a computer monitor or a smart phone, to play the game.

In one embodiment, a hand-held game controller is a smart phone or a Nintendo Switch™ game controller.

In an embodiment, more than one camera, such as the camera 102 and one or more additional cameras, is used to capture images of gestures that are performed by the users 1 and 3. For example, three cameras are placed in the real-world environment to capture the images of the gestures performed by the users 1 and 3. The cameras are parts of a camera system.

In one embodiment, the camera 102 is not integrated within the display device 104. For example, the camera 102 is placed below or in front of or above or to a side of the display device 104.

In one embodiment, the team A includes any other number of players, such as one, two, three, or five players, and the team B includes any other number of players, such as one, two, three, four, five, or six players.

FIG. 2 is a diagram of an embodiment of a system 200 to illustrate a secret communication between the user 1 and the user 3. The communication between the users 1 and 3 is secret in that the communication is to be kept secret from the user 2. The user 2 cannot know of the secret communication between the users 1 and 3 during the play of the game. For example, the communication between the users 1 and 3 is covert and is made by the user 1 using gestures without talking to the user 3. The user 2 is not shown in FIG. 2 to avoid cluttering FIG. 2.

During the play of the game, such as during one or more gaming sessions, the user 1 performs three actions, including an action 1 (A1), an action 2 (A2), and an action 3 (A3), to facilitate generation of an occurrence of a virtual scene 202 of the game. For example, the user 1 makes facial gestures, such as tilting his/her head to his/her left, tilting his/her neck to his/her left, and winking his/her left eye. Winking of an eye or movement of a lens of the eye is an example of an eye gaze. Other examples of the actions performed by the user 1 include other facial gestures, such as movement of eyebrows of the user 1 and movement of lips of the user 1. Yet other examples of the actions performed by the user 1 include hand gestures, such as extending out a finger or a thumb or one or more fingers to make a sign. In this example, the user 1 uses one hand to hold the game controller 1 and uses the other hand to make a hand gesture, and then reverts back to using both hands to operate the game controller 1.

As an example, a gaming session is a time interval in which the game is played after the users 1-3 log into their corresponding user accounts until the users 1-3 log out of their corresponding user accounts. As another example, a gaming session is a time interval in which the game is played after the users 1-3 log into their corresponding user accounts until client devices, such as game controllers or smart phones or tablets or computers, that the users 1-3 operate to play the game get disconnected from a computer network.

The virtual scene 202 provides or is a game context of the game. For example, the virtual scene 202 includes virtual objects 204, 206, 208, and 210. The virtual object 204 is a monster, the virtual object 206 is a game character that is controlled by the user 1 via the game controller 1, and the virtual object 208 is a game character that is controlled by the user 3 via the game controller 3.

The user 1 is indicating to the user 3, by performing the actions A1 through A3, that the monster is coming and that the virtual object 208 should run away. During the one or more gaming sessions, the user 3 notices the actions A1 through A3 and, in response to the actions A1 through A3, the user 3 operates the game controller 3 to perform one or more actions, such as an action A4. Examples of the action A4 include selection of a button on the game controller 3 or movement of the joystick of the game controller 3 or a combination thereof. When the action A4 is performed, the game controller 3 generates control input data, such as control input data 310C, which is processed by a game engine 316 to control the virtual object 208 to run in a direction 212 away from the monster 204. For example, the user 3 moves a joystick of the game controller 3 to control the virtual object 208 to run away from the monster 204. The control input data 310C and the game engine 316 are further described below. The virtual scene 202 shows that the virtual object 208 runs in the direction 212.
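The controller-to-game-engine path described above can be sketched in a few lines. This is an illustrative assumption of how a game engine might map joystick control input data to movement of a virtual object such as the virtual object 208; the class and function names are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    x: float
    y: float

def apply_control_input(obj: VirtualObject, joystick: tuple,
                        speed: float = 1.0) -> None:
    """Move the object along the (dx, dy) direction reported by a joystick."""
    dx, dy = joystick
    obj.x += dx * speed
    obj.y += dy * speed
```

For example, a joystick deflection of (1.0, 0.0) moves the object in the positive-x direction, analogous to the virtual object 208 running in the direction 212.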

In one embodiment, the user 3 is presented with a different game context than a game context that is presented to the user 1. For example, a virtual scene that is displayed on a display device that is held by the user 3 is different than a virtual scene that is displayed on a display device held by the user 1. The virtual scenes are portions of the same game context. For example, the display device held by the user 3 displays the virtual objects 206, 208, and 210 but does not display the virtual object 204. The virtual object 204 is hidden behind the virtual object 210. Also, in this example, the display device held by the user 1 displays the virtual scene 202 having the virtual objects 204, 206, 208, and 210.

FIG. 3A is a diagram of an embodiment of a system 300 to illustrate training of a model 304, such as an artificial intelligence (AI) model, using image data 308, or control input data 310A, 310B, or 310C, or a combination thereof, generated during the one or more gaming sessions. The system 300 includes a camera system 312, the game controllers 1-3, and a processor system 302. The processor system 302 is sometimes referred to herein as a server system. The camera system 312 includes the camera 102 and any other cameras that are placed in the real-world environment in which the users 1-3 are playing the game.

The processor system 302 includes one or more processors of one or more servers that execute a game 314 for play by the users 1-3. For example, the servers that execute the game 314 are servers of a data center or a cloud network that executes the game 314. The servers are located within one data center or are distributed across multiple data centers. As another example, the game 314 is executed by a virtual machine, which is implemented using one or more servers of one or more data centers. Examples of the game 314 are provided above.

The game 314 includes the game engine 316. As an example, the game engine 316 is computer software used to build and create animation, which includes a virtual scene. The game engine 316 renders graphics and applies laws of physics, such as collision detection, to virtual objects of the game 314. As an illustration, the game engine 316 is executed to determine positions and orientations of virtual objects displayed in a virtual scene. As another illustration, the game engine 316 is executed to determine movement of each virtual object in a virtual scene.

Each of the users 1-3 accesses the game 314 after being authenticated by an authentication server of the processor system 302 and logging into his or her user account. For example, a user provides his or her login information, such as a username and password, via a game controller operated by the user to the processor system 302. The authentication server of the processor system 302 authenticates the login information to allow the user to log into the user account to access the user account. Once the user account is accessed, the user is able to play the game 314. The user logs out of his/her user account by selecting a log out button that is displayed on a display screen of a client device operated by the user.

The game engine 316 is coupled to an inferred communication engine 318, which is computer software that is executed by the processor system 302, which includes an AI processor 306. The inferred communication engine 318 trains the model 304 and applies the model 304 to estimate or predict, with greater than a pre-determined amount of probability, an action that will be performed by the user 3 in response to actions performed by the user 1. An example of the pre-determined amount of probability is a probability greater than 50%. Another example of the pre-determined amount of probability is a probability greater than 70%.

An example of the model 304 is computer software that receives data inputs and predicts one or more outcomes based on the data inputs. The model 304 is trained based on the data inputs received over a period of time to predict the one or more outcomes at an end of the period of time. For example, the model 304 is trained based on the image data 308 and the control input data 310C generated during the one or more gaming sessions.

The processor system 302 includes an AI processor system 320, which includes the AI processor 306 and a memory device 312. Examples of the memory device 312 include a random access memory. To illustrate, the memory device 312 is a flash memory or a redundant array of independent disks (RAID). The memory device 312 is coupled to the AI processor 306.

The AI processor 306 includes a feature extractor 322, a classifier 324, and the model 304. The feature extractor 322 and the classifier 324 are computer software programs that are executed by the AI processor 306 to train the model 304.

The camera system 312 and the game controllers 1-3 are coupled to the processor system 302. For example, the camera system 312 and the game controllers 1-3 are coupled to the processor system 302 via a computer network or a combination of the computer network and a cellular network. Examples of the computer network include the Internet, an intranet, and a combination of the Internet and an intranet.

The image data 308 that is captured by the camera system 312 during the one or more gaming sessions is provided to the processor system 302. An example of the image data 308 includes one or more images of the actions A1-A3 performed by the user 1. Also, the controller 1 provides the control input data 310A to the processor system 302 during the one or more gaming sessions. The control input data 310A is generated by the controller 1 when one or more buttons and/or one or more joysticks of the controller 1 are operated by the user 1 during the one or more gaming sessions. Similarly, the control input data 310B is generated by the controller 2 when one or more buttons and/or one or more joysticks of the controller 2 are operated by the user 2 during the one or more gaming sessions, and the control input data 310C is generated by the controller 3 when one or more buttons and/or one or more joysticks of the controller 3 are operated by the user 3 during the one or more gaming sessions.

One or more of the control input data 310A, 310B, and 310C is referred to herein as control input data 310, which is stored in the memory device 312. Also, the image data 308 is stored in the memory device 312, and game contexts 326 are stored in the memory device 312. The virtual scene 202 (FIG. 2) is an example of the game contexts 326. The game contexts 326 include one or more additional virtual scenes in addition to the virtual scene 202. Each virtual scene, as described herein, is data that includes positions and orientations of multiple virtual objects in the virtual scene. For example, a first virtual scene includes a first position and a first orientation of a first virtual object and a first position and a first orientation of a second virtual object. A second virtual scene includes a second position and a second orientation of the first virtual object and a second position and a second orientation of the second virtual object. The second virtual scene is displayed consecutive in time to the first virtual scene. The second position of the first virtual object is consecutive to the first position of the first virtual object and the second position of the second virtual object is consecutive to the first position of the second virtual object. Also, the second orientation of the first virtual object is consecutive to the first orientation of the first virtual object and the second orientation of the second virtual object is consecutive to the first orientation of the second virtual object. As another example, the game contexts 326 include a virtual scene that is displayed in time preceding to a display of the virtual scene 202.
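The per-scene data described above can be represented as a small data structure: each virtual scene records a position and orientation (a pose) for every virtual object it contains, and consecutive scenes hold consecutive poses. This is a minimal sketch under assumed names; the patent does not prescribe a data layout.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pose:
    x: float
    y: float
    heading_deg: float  # orientation as a heading angle

@dataclass
class VirtualScene:
    # Maps a virtual-object identifier (e.g. 204, 206, 208, 210) to its Pose.
    poses: dict

# Two consecutive scenes: each object's pose in scene_2 follows its pose in scene_1.
scene_1 = VirtualScene(poses={206: Pose(0.0, 0.0, 90.0), 208: Pose(5.0, 0.0, 90.0)})
scene_2 = VirtualScene(poses={206: Pose(1.0, 0.0, 90.0), 208: Pose(6.0, 0.0, 90.0)})
```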

The AI processor 306 processes the image data 308, the control input data 310, and the game contexts 326 to train the model 304. For example, the feature extractor 322 extracts features f1, f2, f3, and f4 from the image data 308 and the control input data 310. An example of a feature of the image data 308 includes a left eye of the user 1, or a right eye of the user 1, or a head of the user 1, or a left eyebrow of the user 1, or a right eyebrow of the user 1, or a neck of the user 1, or a skeleton of the user 1, or a left hand of the user 1, or a right hand of the user 1, or a left arm of the user 1, or a right arm of the user 1, or an index finger of the user 1, or a combination thereof. The eye, the head, the eyebrow, the neck, the skeleton, the hand, and the arm of the user 1 are examples of the body part of the user 1. An example of a feature of the control input data 310 includes a movement in a direction of a joystick or a selection of a button of the game controller 3. As another example, the feature extractor 322 identifies, from the image data 308, the head of the user 1. The feature extractor 322 identifies the head of the user 1 from a comparison of the head with a pre-determined shape and a pre-determined size of a head of a person. The pre-determined shape and the pre-determined size of the head are stored within the memory device 312. As yet another example, the feature extractor 322 identifies the neck of the user 1 from a comparison of the neck with a pre-determined shape and a pre-determined size of a neck of a person. The pre-determined shape and the pre-determined size of the neck are stored within the memory device 312. As still another example, the feature extractor 322 identifies the left eye of the user 1 from a comparison of the left eye with a pre-determined shape and a pre-determined size of a left eye of a person. The pre-determined shape and the pre-determined size of the left eye are stored within the memory device 312.
As another example, the feature extractor 322 identifies from the control input data 310 that a button labeled "X" is selected on the game controller 3 operated by the user 3, as compared to another button labeled "O" on the game controller 3 or compared to movement of a joystick on the game controller 3. The control input data 310 includes an identity, such as a media access control (MAC) address or an alphanumeric identifier, of the game controller 3 and includes an identity of whether the "X" button or the "O" button or the joystick is used. The control input data 310 further includes a direction of movement of the joystick of the game controller 3 and an amount of movement of the joystick. Examples of the direction of movement of the joystick include an up movement, a down movement, a right movement, a clockwise movement, a counterclockwise movement, and a left movement. An example of the amount of movement of the joystick of the game controller 3 includes an amount greater than a pre-determined amount, which is stored in the memory device 312.
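A control-input record of the kind described above might be shaped as follows. The field names, and the example MAC address, are illustrative assumptions; the patent only requires that the record carry a controller identity, which control was used, and, for a joystick, its direction and amount of movement.

```python
from dataclasses import dataclass

@dataclass
class ControlInput:
    controller_id: str   # e.g. a MAC address or an alphanumeric identifier
    control: str         # which control was used: "X", "O", or "joystick"
    direction: str = ""  # "up", "down", "left", "right", "clockwise", ...
    amount: float = 0.0  # amount of joystick movement

def exceeds_threshold(ci: ControlInput, pre_determined_amount: float) -> bool:
    """True when a joystick was moved by more than the stored threshold."""
    return ci.control == "joystick" and ci.amount > pre_determined_amount
```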

During processing of the image data 308 that is received from the event-based camera, the AI processor 306 processes, such as interprets, pixels that display the background to be black or of a uniform color, texture, and intensity. The background is stationary and does not change with movement of the users 1-3 of FIG. 1. In this manner, the AI processor 306 processes the image data 308 captured by the event-based camera to highlight the movement of the users 1-3 within the background.

The classifier 324 classifies the features that are identified by the feature extractor 322. For example, the classifier 324 determines that the head of the user 1 has moved in a pre-determined direction, such as a left direction, by greater than a pre-determined amount. Other examples of the pre-determined direction of movement of the head of the user 1 include a right direction or an up direction or a down direction or turning in a clockwise direction or turning in a counterclockwise direction. The pre-determined direction and the pre-determined movement of the head are stored in the memory device 312. As another example, the classifier 324 determines that the neck of the user 1 has moved in a pre-determined direction, such as a left direction, by greater than a pre-determined amount. Other examples of the pre-determined direction of movement of the neck of the user 1 include a right direction or a forward direction or a backward direction or turning in a clockwise direction or turning in a counterclockwise direction. The pre-determined direction and the pre-determined movement of the neck are stored in the memory device 312. As another example, the classifier 324 determines that an eyelid of the left eye of the user 1 has moved in a pre-determined direction, such as a down direction, by greater than a pre-determined amount. Other examples of the pre-determined direction of movement of the left eye of the user 1 include a right direction or an up direction or a left direction or a right direction of movement of an eye lens of the eye of the user 1. The pre-determined direction and the pre-determined movement of the left eye are stored in the memory device 312.
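The classification rule above — a body part has moved in the pre-determined direction by greater than the pre-determined amount — reduces to a simple threshold test. The following sketch assumes a signed displacement along one axis; the representation of direction as a sign is an illustrative choice, not from the patent.

```python
def classify_movement(displacement: float, pre_determined_direction: int,
                      pre_determined_amount: float) -> bool:
    """True when a body part moved in the pre-determined direction
    (-1 for left/down, +1 for right/up) by more than the threshold."""
    return (displacement * pre_determined_direction > 0
            and abs(displacement) > pre_determined_amount)
```

For example, a head tilt of -3.0 units (leftward) against a leftward pre-determined direction and a threshold of 2.0 classifies as the action A1 of tilting the head to the left.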

The classifier 324 determines that the actions A1 through A3 are performed by the user 1 by classifying the features f1 through f3, and determines that the action A4 is performed by the user 3 by classifying the feature f4. For example, upon determining that the user 1 tilts his/her head in the pre-determined direction by greater than the pre-determined amount, the classifier 324 determines that the user 1 has performed the action A1 of tilting his/her head. As another example, upon determining that the user 1 moves his/her left eyelid in the pre-determined direction by greater than the pre-determined amount, the classifier 324 determines that the user 1 has performed the action A2 of winking his/her left eye. As yet another example, upon determining that the user 1 tilts his/her neck in the pre-determined direction by greater than the pre-determined amount, the classifier 324 determines that the user 1 has performed the action A3 of moving his/her neck to his/her left. As another example, upon determining that the user 3 has moved the joystick of the game controller 3 upward by greater than the pre-determined amount, the classifier 324 determines that the user 3 has performed the action A4 of moving the joystick of the game controller 3 in the upward direction.

The classifier 324 determines whether the control input data 310C is generated within a pre-determined time period of generation of the image data 308. To illustrate, a time of generation of the control input data 310C is received with the control input data 310C from the controller 3 by the processor system 302. The controller 3 has a clock source that measures a time at which a joystick of the game controller 3 is moved or a button of the game controller 3 is selected by the user 3. Examples of a clock source, as used herein, include a clock signal generator, an Internet clock source, and an electronic oscillator. Similarly, a time period of generation of the image data 308 is received with the image data 308 from the camera system 312 by the processor system 302. Each camera of the camera system 312 has a clock source that measures the time period of generation of the image data 308. The classifier 324 determines whether the time of generation of the control input data 310C is within a pre-determined limit from an end of the time period of generation of the image data 308. Upon determining that the time of generation of the control input data 310C is within the pre-determined limit from the end of the time period of generation of the image data 308, the classifier 324 determines that the control input data 310C is generated in response to the image data 308. On the other hand, upon determining that the time of generation of the control input data 310C is outside the pre-determined limit from the end of the time period of generation of the image data 308, the classifier 324 determines that the control input data 310C is not generated in response to the image data 308.
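The timing test described above can be sketched as a single comparison. This is an assumption about how "within the pre-determined limit from the end of the time period" might be evaluated: the controller input counts as a response only if its timestamp falls after the image-capture window ends and within the limit.

```python
def is_response(input_time: float, image_capture_end: float,
                pre_determined_limit: float) -> bool:
    """True when control input arrives within the pre-determined limit
    after the end of the image-capture time period (times in seconds)."""
    elapsed = input_time - image_capture_end
    return 0.0 <= elapsed <= pre_determined_limit
```

A joystick movement 0.5 s after the gesture images end, against a 1 s limit, would be treated as a response; one 2 s later would not.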

The classifier 324 further identifies a game context associated with the features f1 through f4. For example, the classifier 324 sends a request to the game engine 316 to identify positions and orientations of the virtual objects 204, 206, 208, and 210 (FIG. 2). The game engine 316 receives the control input data 310A from the game controller 1 and determines a position and an orientation of the virtual object 206 in the virtual scene 202 (FIG. 2) based on the control input data 310A. Moreover, the game engine 316 identifies a position and an orientation of the virtual object 204 and a position and an orientation of the virtual object 210 during a time period at which the virtual object 206 has the position and the orientation in the virtual scene 202. The processor system 302 has a clock source, such as an Internet clock source, to determine the time period in which the virtual objects 204, 206, and 208 have the corresponding positions and orientations in the virtual scene 202. Also, the game engine 316 receives the control input data 310C from the game controller 3 and determines movement of the virtual object 208. To illustrate, the game engine 316 determines that the virtual object 208 is running away in the direction 212 (FIG. 2) during the time period. In response to the request received from the classifier 324, the game engine 316 provides the positions and orientations of the virtual objects 204, 206, 208, and 210 to the classifier 324. Upon receiving the positions and orientations, the classifier 324 identifies the game context of the virtual scene 202. The game context of the virtual scene 202 includes the positions and orientations of the virtual objects 204, 206, and 210. Moreover, the game context of the virtual scene 202 includes the movement of the virtual object 208 in the direction 212. The game context of the virtual scene 202 is an example of the game contexts 326.

The classifier 324 associates, such as establishes a correspondence between, the game context, such as the virtual scene 202, and a set of the actions A1 through A4 that are determined as being performed by the users 1 and 3. The actions A1-A4 are performed to generate the game context. An example of the correspondence between the game context and the set of actions performed by the users 1 and 3 includes a one-to-one relationship or a link or a unique relationship. For example, the classifier 324 determines that the action A4 is performed by the user 3 in response to the actions A1-A3 performed by the user 1 and that the actions A1-A4 are performed to generate the game context, to establish a positive correspondence between the actions A1-A3, the action A4, and the game context generated based on the actions A1-A4. The classifier 324 determines that the action A4 is performed in response to the actions A1-A3 upon determining that the control input data 310C is generated within the pre-determined limit from the end of the time period of generation of the image data 308. As another example, the classifier 324 determines that the action A4 is not performed by the user 3 in response to the actions A1-A3 performed by the user 1, to establish a negative correspondence between the actions A1-A3, the action A4, and the game context generated based on the actions A1-A4. The classifier 324 determines that the action A4 is not performed in response to the actions A1-A3 upon determining that the control input data 310C is generated outside the pre-determined limit from the end of the time period of generation of the image data 308.
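A correspondence of the kind described above amounts to a labeled training example. The following sketch, under assumed names, assembles one: the gesture actions, the response action, the game context, and a positive or negative label derived from the timing test.

```python
def make_correspondence(gesture_actions, response_action, game_context,
                        responded_in_time: bool) -> dict:
    """Bundle one labeled training example for the model."""
    return {
        "gestures": tuple(gesture_actions),  # e.g. ("A1", "A2", "A3")
        "response": response_action,         # e.g. "A4"
        "context": game_context,             # e.g. a scene identifier
        # Positive when the response fell within the pre-determined limit.
        "label": "positive" if responded_in_time else "negative",
    }
```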

The classifier 324 trains the model 304 based on the correspondence, such as the positive correspondence or the negative correspondence, that is established between the game context and the actions A1 through A4. For example, the classifier 324 provides data of the game context, such as the virtual scene 202, and data indicating the actions A1-A4, and the correspondence to the model 304. The model 304 receives the correspondence established between the game context and the actions A1 through A4 to generate an inference of communication between the user 1 and the user 3. For example, the model 304 determines that the user 3 performs the action A4 in response to the actions A1-A3 performed by the user 1.

During the one or more gaming sessions, additional correspondences, such as positive correspondences and negative correspondences, between additional actions performed by the users 1 and 3 and additional game contexts, such as the game contexts 326, of the game 314 are received by the model 304 from the classifier 324. The model 304 receives the additional correspondences to generate additional inferences of communication between the users 1 and 3. For example, the model 304 determines that the user 3 performs the action A4 for each additional instance or additional time in response to the actions A1-A3 being performed by the user 1 for the additional instance or additional time.

The correspondence established between the game context and the actions A1 through A4 and the additional correspondences are examined by the model 304 to determine a probability that the user 3 will perform the action A4 in response to the user 1 performing the actions A1-A3. For example, upon receiving a greater number of positive correspondences between the actions A1-A3 and the action A4 than a number of negative correspondences between the actions A1-A3 and the action A4, the model 304 determines that it is more likely than not, e.g., there is greater than 50% probability, that the user 3 will perform the action A4 to control the virtual object 208 to run away when the user 1 performs the actions A1 through A3. As another example, upon receiving a number of positive correspondences between the actions A1-A3 and the action A4 that is greater than a number of negative correspondences between the actions A1-A3 and the action A4 by a pre-determined amount, the model 304 determines that there is a 70% or a 90% chance that the user 3 will perform the action A4 to control the virtual object 208 to run away when the user 1 performs the actions A1 through A3. As yet another example, upon receiving a greater number of negative correspondences between the actions A1-A3 and the action A4 than a number of positive correspondences between the actions A1-A3 and the action A4, the model 304 determines that it is not likely, e.g., there is less than 50% probability, that the user 3 will perform the action A4 to control the virtual object 208 to run away when the user 1 performs the actions A1 through A3.
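The count-based inference above can be sketched as a proportion. Estimating the probability as the fraction of positive correspondences is an illustrative assumption; the patent only requires that more positives than negatives imply a probability greater than 50%, which this estimate satisfies.

```python
def estimate_response_probability(num_positive: int, num_negative: int) -> float:
    """Estimate P(user 3 performs A4 | user 1 performs A1-A3) from counts
    of positive and negative correspondences."""
    total = num_positive + num_negative
    return num_positive / total if total else 0.0
```

With 7 positive and 3 negative correspondences, the estimate is 0.7, i.e. more likely than not.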

In one embodiment, the inferred communication engine 318 is executed by one or more servers. For example, the game engine 316 is executed by one or more servers and the inferred communication engine 318 is executed by one or more servers. The one or more servers executing the game engine 316 are the same as or different from the one or more servers executing the inferred communication engine 318. The one or more servers executing the inferred communication engine 318 are located within the same data center as, or a different data center from, the data center that executes the game engine 316.

In an embodiment, instead of the AI processor 306, multiple AI processors are used to execute the feature extractor 322 and the classifier 324 to train the model 304.

In one embodiment, the camera system 312 and the game controllers 1-3 are coupled to the processor system 302 via a cable or via a local radio frequency (RF) wireless connection.

FIG. 3B is a diagram of an embodiment of a system 350 to illustrate training of the model 304. The system 350 includes the feature extractor 322 and the classifier 324. The system 350 also includes the model 304. The feature extractor 322 further includes a facial tracker 352, a skeletal tracker 354, and a control input tracker 356. Moreover, the classifier 324 includes a facial feature classifier 360, a skeletal feature classifier 362, a control input feature classifier 364, and a game state classifier 368. As an example, each tracker 352, 354, and 356 is computer software that is executed by one or more servers. Similarly, as an example, each classifier 360, 362, 364, and 368 is computer software that is executed by one or more servers. As another example, each tracker 352, 354, and 356, and each classifier 360, 362, 364, and 368 is implemented using a combination of hardware and software, such as within a field programmable gate array (FPGA) or a programmable logic device (PLD). To illustrate, each tracker 352, 354, and 356, and each classifier 360, 362, 364, and 368 is a separate logic block on the PLD.

The facial tracker 352 is coupled to the facial feature classifier 360. Also, the skeletal tracker 354 is coupled to the skeletal feature classifier 362. The control input tracker 356 is coupled to the control input feature classifier 364. The classifiers 360, 362, 364, and 368 are coupled to the model 304. The classifiers 360, 362, and 364 are coupled to the classifier 368.

The facial tracker 352 receives the image data 308 and identifies facial features, such as the feature f2. The facial features of the user 1 include features of a face of the user 1 (FIG. 1). For example, as explained above, the facial tracker 352 identifies the face of the user 1 from the image data 308, or the left eye of the user 1, or the left eyebrow of the user 1, or a lip of the user 1. To illustrate, the facial tracker 352 distinguishes the facial features from remaining features of a body of the user 1.

Moreover, the skeletal tracker 354 receives the image data 308 and identifies skeletal features, such as the features f1 and f3, of the body of the user 1. As an example, in a manner explained above, the skeletal tracker 354 distinguishes the body part, such as the neck or a left hand or a right hand, of the user 1 from other body parts of the user 1.

The control input tracker 356 receives the control input data 310 and identifies control input features, such as a press of a button on the game controller 3 or a movement of a joystick on the game controller 3 (FIG. 1), and the identity of the game controller 3. For example, the control input tracker 356 receives, from the game controller 3, the identity of the game controller 3 and an identification indicating whether the button or the joystick was used on the game controller 3.

The facial feature classifier 360 receives the facial features that are identified by the facial tracker 352 and classifies the facial features to determine actions performed by the user 1. For example, the facial feature classifier 360 determines that the face of the user 1 has moved to the left by greater than the pre-determined amount. As another example, the facial feature classifier 360 determines that the left eyelid of the user 1 has closed for greater than a pre-determined amount of time, which is longer than a normal blink, to determine that the action A2 is performed. As yet another example, the facial feature classifier 360 determines that the left eyelid of the user 1 closes at a rate slower than a pre-determined rate at which the normal blink occurs to determine that the action A2 is performed. As another example, the facial feature classifier 360 determines that the left eyelid of the user 1 has closed for greater than the pre-determined amount of time and that the left eyelid of the user 1 closes at the rate slower than the pre-determined rate at which the normal blink occurs to determine that the action A2 is performed.
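The eyelid tests above reduce to two threshold comparisons. The sketch below is illustrative only; the threshold values and function names are assumptions, since the disclosure leaves the pre-determined amounts unspecified.

```python
# Illustrative check for the facial feature classifier's deliberate-wink test:
# an eyelid closure counts as the gesture (action A2) when it lasts longer
# than a normal blink, or closes more slowly than a normal blink, or both.

NORMAL_BLINK_DURATION_S = 0.3   # assumed upper bound on a normal blink
NORMAL_BLINK_CLOSE_RATE = 10.0  # assumed normal closing rate (arbitrary units/s)

def is_deliberate_wink(closed_duration_s: float, close_rate: float) -> bool:
    longer_than_blink = closed_duration_s > NORMAL_BLINK_DURATION_S
    slower_than_blink = close_rate < NORMAL_BLINK_CLOSE_RATE
    return longer_than_blink or slower_than_blink
```

The last example in the text, where both conditions must hold, would simply replace the `or` with an `and`.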

Also, the skeletal feature classifier 362 receives the skeletal features that are identified by the skeletal tracker 354 and classifies the skeletal features. As an example, the skeletal feature classifier 362 determines that the neck of the user 1 has moved in a pre-determined direction, such as left, beyond a pre-determined distance to determine that the action A3 is performed. As another example, the skeletal feature classifier 362 determines that the head of the user 1 has moved in a pre-determined direction, such as left, beyond a pre-determined distance to determine that the action A1 is performed.

The control input feature classifier 364 classifies the control input features that are identified by the control input tracker 356. For example, the control input feature classifier 364 determines that the joystick of the game controller 3 is moved for greater than a pre-determined amount of distance in a pre-determined direction to determine that the action A4 is performed. As another example, the control input feature classifier 364 determines that a button on the game controller 3 is selected consecutively multiple times at greater than a pre-determined frequency to determine that the action A4 is performed.
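The two control input tests above can be sketched as follows. This is an assumed illustration; the threshold values, direction encoding, and the way press frequency is derived from timestamps are not specified in the disclosure.

```python
# Sketch of the control input feature classifier's two tests: a joystick moved
# beyond a pre-determined distance in a pre-determined direction, or a button
# pressed repeatedly above a pre-determined frequency, classifies as action A4.

def joystick_is_action(displacement: float, direction: str,
                       min_distance: float = 0.5,
                       expected_direction: str = "up") -> bool:
    """Joystick moved far enough in the pre-determined direction."""
    return direction == expected_direction and displacement > min_distance

def button_is_action(press_times_s: list, min_frequency_hz: float = 3.0) -> bool:
    """Consecutive presses faster than min_frequency_hz count as the action."""
    if len(press_times_s) < 2:
        return False
    span = press_times_s[-1] - press_times_s[0]
    if span <= 0:
        return False
    frequency = (len(press_times_s) - 1) / span
    return frequency > min_frequency_hz
```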

The game state classifier 368 accesses, from the memory device 312 (FIG. 3A), a game state 1 associated with the virtual scene 202. Also, the game state classifier 368 receives the actions A1-A4 from the classifiers 360, 362, and 364. The game state 1 includes positions and orientations of the virtual objects 204, 206, 208, and 210 in the virtual scene 202 (FIG. 2). Also, the game state classifier 368 establishes a correspondence between the game state 1 and the actions A1 through A4. The game state classifier 368 provides the correspondence between the actions A1-A4 and the game state 1 to the model 304 to train the model 304. Over time, upon receiving multiple correspondences between multiple actions that are performed by the user 1 and multiple game states of the game 314 (FIG. 3A), the model 304 is trained. For example, the model 304 determines that there is a high probability, which is greater than a pre-determined amount of probability, that the user 3 will control the game controller 3 to move the virtual object 208 in the direction 212 when the user 1 performs the actions A1-A3. Examples of the pre-determined amount of probability include a probability of 50% and a probability of 70%.
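The accumulation of game-state-keyed correspondences described above can be modeled as a lookup table of counts. This is a toy sketch under stated assumptions: the class structure, the string encoding of game states and gesture tuples, and the 0.5 prior are all illustrative, not from the disclosure.

```python
# Toy model of training on correspondences: each observation pairs a game
# state and gesture sequence with whether the response (action A4) occurred.
from collections import defaultdict

class CorrespondenceModel:
    def __init__(self):
        # counts[(state, gestures)] = [positive_count, negative_count]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, state: str, gestures: tuple, responded: bool) -> None:
        """Record one positive or negative correspondence."""
        self.counts[(state, gestures)][0 if responded else 1] += 1

    def probability(self, state: str, gestures: tuple) -> float:
        """Estimated chance the response follows these gestures in this state."""
        pos, neg = self.counts[(state, gestures)]
        total = pos + neg
        return pos / total if total else 0.5

model = CorrespondenceModel()
for _ in range(8):  # eight sessions where user 3 ran away after A1-A3
    model.observe("state1", ("A1", "A2", "A3"), responded=True)
for _ in range(2):  # two sessions where user 3 did not respond
    model.observe("state1", ("A1", "A2", "A3"), responded=False)
p_high = model.probability("state1", ("A1", "A2", "A3"))  # 0.8
```

With 8 positive and 2 negative correspondences the estimate is 80%, which would exceed either example pre-determined amount (50% or 70%) in the text.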

FIG. 4 is a diagram of an embodiment of a system 400 to illustrate communication of audio data 404 from the inferred communication engine 318 to a headphone 402 that is worn by the user 3 on his/her head. The system 400 includes the processor system 302 and the camera 102. It should be noted that the user 2 is not shown in FIG. 4 to avoid cluttering FIG. 4. The user 2 is present between the users 1 and 3 in FIG. 4. After the model 304 (FIG. 3B) is trained during the one or more gaming sessions, the users 1 and 3 play the game 314 (FIG. 3A) another time, such as during an Nth gaming session. For example, the users 1-3 play the game 314 during the first gaming session in which the image data 308 (FIG. 3A) is captured and play the game 314 during the Nth gaming session in which image data 403 is captured by the camera system 312 (FIG. 3A), where N is an integer greater than one. Each gaming session occurs after the users 1-3 log in to their corresponding user accounts. For example, the users 1-3 log into their corresponding user accounts to start one of the one or more gaming sessions and again log into their corresponding user accounts to start the Nth gaming session.

During the Nth gaming session, again the users 1 and 3 are in the same team and the user 2 is in a different team. During the Nth gaming session, the camera 102 captures the image data 403 of the gestures, such as the actions A1-A3, that are performed by the user 1. The gestures performed during the Nth gaming session are sometimes referred to herein as additional gestures. The image data 403 captured during the Nth gaming session is sometimes referred to herein as additional image data. The image data 403 is captured during a time period in which the virtual objects 204, 206, 208, and 210 are at the positions and orientations illustrated in the virtual scene 202 (FIG. 2).

However, during the Nth gaming session, the user 3 does not notice the actions A1-A3 that are performed by the user 1. For example, the user 3 is not looking at the user 1 to notice the actions A1-A3 but is rather looking at the display device 104 (FIG. 1). As another example, during the Nth gaming session, the processor system 302 does not receive control input data, such as the control input data 310C, in response to the actions A1-A3 performed during the Nth gaming session. To illustrate, the processor system 302 determines, during the Nth gaming session, that the control input data 310C is not generated during the Nth gaming session within the pre-determined time period from an end of a time period of generation of the image data 403 during the Nth gaming session. In the illustration, a time of generation of the control input data 310C is received with the control input data 310C from the game controller 3 by the processor system 302 during the Nth gaming session. The game controller 3 has the clock source that measures a time at which a joystick of the game controller 3 is moved or a button of the game controller 3 is selected by the user 3 during the Nth gaming session. Similarly, the time period of generation of the image data 403 is received with the image data 403 from the camera system 312 by the processor system 302 during the Nth gaming session. The classifier 324 (FIG. 3A) determines whether the time of generation of the control input data 310C is within the pre-determined limit from the end of the time period of generation of the image data 403 during the Nth gaming session.
Upon determining that the time of generation of the control input data 310C is outside the pre-determined limit from the end of the time period of generation of the image data 403 during the Nth gaming session, the classifier 324 determines that the control input data 310C is not generated during the Nth gaming session in response to the image data 403 to further determine that the action A4 is not performed in response to the actions A1-A3 during the Nth gaming session.
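The timing test above can be sketched as a window check on timestamps. This is an illustrative sketch; the variable names and the 5-second limit are assumptions, since the disclosure does not give a value for the pre-determined limit.

```python
# Minimal sketch of the response-window test: a control input counts as a
# response to the gestures only when its timestamp falls within a
# pre-determined limit after the end of the image capture period.

PREDETERMINED_LIMIT_S = 5.0  # assumed response window

def input_is_response(input_time_s: float, image_capture_end_s: float,
                      limit_s: float = PREDETERMINED_LIMIT_S) -> bool:
    """True when the input arrives within the limit after image capture ends."""
    elapsed = input_time_s - image_capture_end_s
    return 0.0 <= elapsed <= limit_s
```

An input arriving outside the window is treated as unrelated to the gestures, which is the negative-correspondence case described above.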

The inferred communication engine 318 receives the image data 403 captured during the Nth gaming session and determines from the image data 403 that the actions A1-A3 are performed by the user 1 during the Nth gaming session. The inferred communication engine 318 executes the model 304 to determine the probability that the user 3 will perform the action A4 in response to the actions A1-A3 performed during the Nth gaming session. For example, the model 304 is executed to determine the probability that the user 3 will control the game controller 3 to move the virtual object 208 in the direction 212 to run away from the monster (FIG. 2) during the Nth gaming session. Upon determining that the probability that the user 3 will perform the action A4 in response to the actions A1-A3 performed during the Nth gaming session is high, the inferred communication engine 318 generates audio data 404 including a notification for the user 3. The audio data 404 is an example of a recommendation generated by the inferred communication engine 318. During the Nth gaming session, the inferred communication engine 318 sends the audio data 404 to the headphone 402 to indicate to the user 3 to control the game controller 3 to further control the virtual object 208 to run away in the direction 212. For example, the audio data 404 includes a message "run away". As another example, the audio data 404 includes a message "run away, a monster is coming". As such, there is secret communication between the users 1 and 3 during the play of the game 314.
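The inference-to-recommendation step above amounts to gating a notification on the predicted probability. The sketch below is an assumption about how such gating could look; only the message strings come from the example in the text.

```python
# Hedged sketch of the recommendation step: when gestures are detected and
# the trained model predicts a high response probability, the engine emits
# an audio message for the second player instead of waiting for the input.
from typing import Optional

def recommend(gesture_detected: bool, response_probability: float,
              threshold: float = 0.5) -> Optional[str]:
    """Return an audio message for the second player, or None."""
    if gesture_detected and response_probability > threshold:
        return "run away, a monster is coming"
    return None
```

The returned string would then be synthesized as the audio data sent to the headphone; without a detected gesture, or with a low probability, no recommendation is sent.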

The control input data 310C in response to the actions A1-A3 is not received during the Nth gaming session until after the model 304 generates the recommendation to send during the Nth gaming session. For example, during the Nth gaming session, upon listening to sound that is generated based on the audio data 404, the user 3 selects one or more buttons on the game controller 3. Upon selection of the one or more buttons during the Nth gaming session, the game controller 3 generates the control input data 310C during the Nth gaming session.

In one embodiment, the image data 308 and the image data 403 are generated during the same gaming session. For example, the model 304 is trained based on the image data 308 during a first one of the one or more gaming sessions and the image data 403 is generated during the first one of the one or more gaming sessions.

In an embodiment, the systems and methods for facilitating secret communication between players are helpful to a player with speaking disabilities. For example, the user 1 has a speaking disability, e.g., is mute, or has a speech disorder, or has a speech impediment. The user 1 makes facial gestures, which are analyzed by the processor system 302 to generate the audio data 404, which is provided to the headphone 402 worn by the user 3. As such, the user 1, who has the speaking disability, is able to communicate with the user 3 by making gestures during the play of the game 314. As another example, the user 1 has the speaking disability and is able to express himself/herself using sign language, and the user 3 is a friend or family member of the user 1. The user 3 is capable of understanding the sign language. However, the user 1 is holding his/her game controller 1 while playing the game 314. As such, the user 1 cannot use his/her hands while playing the game 314 to communicate with the user 3 via the sign language. Again, during the play of the game 314, the facial gestures made by the user 1 are analyzed by the processor system 302 in a manner described above to generate the audio data 404, which is sent to the headphone 402 worn by the user 3. This is how the systems and methods described herein facilitate communication between users having speaking disabilities.

FIG. 5 is a diagram of an embodiment to illustrate a gaming environment 500 associated with electronic sports (Esports). The gaming environment 500 includes multiple computers 502, 504, 506, 508, 510, and 512. Each computer 502-512 is operated by a corresponding user 1, 2, 3, 4, 5, and 6. For example, the computer 502 is operated by the user 1 and the computer 504 is operated by the user 2. To illustrate, the user 1 logs into his/her user account via the computer 502 to access the game 314 (FIG. 3A) and the user 2 logs into his/her user account via the computer 504 to access the game 314.

Each user 1-6 is wearing a headphone in the gaming environment 500. For example, the user 1 is wearing a headphone 518, the user 2 is wearing a headphone 520, the user 3 is wearing the headphone 402, the user 4 is wearing a headphone 522, the user 5 is wearing a headphone 524, and the user 6 is wearing a headphone 526. Also, in the gaming environment 500, the users 1 and 3 are in the same team as the user 5, and the users 4, 6, and 2 are in the same team.

The gaming environment 500 includes one or more display screens, such as a display screen 514 and a display screen 516. The display screens 514 and 516 display the game 314 that is played by the users 1-6. For example, the display screens 514 and 516 display the virtual scene 202 (FIG. 2) or another virtual scene that is displayed on display devices of the computers 502, 504, and 506. The gaming environment 500 includes many spectators (not shown) that view the game 314 that is being played by the users 1-6.

The model 304 (FIG. 3A) is executed during the play of the game 314. For example, during the play of the game 314, when the user 1 performs one or more actions, such as the actions A1-A3, the user 3 is prompted via the headphone 402 by the inferred communication engine 318 (FIG. 4) to secretly perform the action A4 or to run away from the monster or to fight back or to dance or to jump or to start collecting grenades. The secret communication between the users 1 and 3 is not noticeable to the user 2, who is in the different team.

FIG. 6A is a diagram of an embodiment of a system 600 to illustrate communication via a router and modem 604 and a computer network 602 between the processor system 302 and multiple devices, which include the camera 102 and the game controllers 1, 2, and 3. The system 600 includes the camera 102, the game controllers 1-3, the router and modem 604, the computer network 602, and the processor system 302. The system 600 also includes the headphone 402 and the display device 104. The display device 104 includes a display screen 632, such as an LCD display screen, an LED display screen, or a plasma display screen. An example of the computer network 602 includes the Internet or an intranet or a combination thereof. An example of the router and modem 604 includes a gateway device. Another example of the router and modem 604 includes a router device and a modem device.

The camera 102 includes an image capture circuit 606 and a communication device 608. Details of the image capture circuit 606 are provided below. The image capture circuit 606 is coupled to the communication device 608, which is coupled to the router and modem 604 via a wireless connection. Examples of a wireless connection include a Wi-Fi™ connection and a Bluetooth™ connection. The display screen 632 is coupled to the router and modem 604 via a wired connection. Examples of a wired connection, as used herein, include a transfer cable, which transfers data in a serial manner, or in a parallel manner, or by applying a universal serial bus (USB) protocol.

The game controller 1 includes controls 610, a digital signal processor system (DSPS) 616, and a communication device 622. The controls 610 are coupled to the DSPS 616, which is coupled to the communication device 622. Similarly, the game controller 2 includes controls 612, a DSPS 618, and a communication device 624. The controls 612 are coupled to the DSPS 618, which is coupled to the communication device 624. Also, the game controller 3 includes controls 614, a DSPS 620, and a communication device 626. The controls 614 are coupled to the DSPS 620, which is coupled to the communication device 626.

Examples of each of the controls 610, 612, and 614 include buttons and joysticks. Examples of each of the communication devices 608, 622, 624, and 626 include a communication circuit that enables communication using a wireless protocol, such as Wi-Fi™ or Bluetooth™, between the communication device and the router and modem 604. Other examples of each of the communication devices 608, 622, 624, and 626 include a communication circuit that enables communication using a wired protocol, such as a serial transfer protocol, a parallel transfer protocol, or the USB protocol.

The communication devices 622, 624, and 626 are coupled to the router and modem 604 via a corresponding wireless connection, such as a Wi-Fi™ or Bluetooth™ wireless connection. The communication device 626 is coupled to the headphone 402 via a wired connection or a wireless connection. Examples of a wireless connection, as used herein, include a connection that applies a wireless protocol, such as a Wi-Fi™ or Bluetooth™ protocol. The router and modem 604 is coupled to the computer network 602, which is coupled to the processor system 302.

During the play of the game 314 (FIG. 3A), the processor system 302 generates image frame data from one or more game states of the game 314 and applies a network communication protocol, such as transmission control protocol over Internet protocol (TCP/IP), to the image frame data to generate one or more packets and sends the packets via the computer network 602 to the router and modem 604. The modem of the router and modem 604 applies the network communication protocol to the one or more packets received from the computer network 602 to obtain or extract the image frame data, and provides the image frame data to the router of the router and modem 604. The router routes the image frame data via the wired connection between the router and the display screen 632 to the display screen 632 for display of one or more images of the game 314 based on the image frame data received within the one or more packets.

During the display of the one or more images of the game 314, the image capture circuit 606 captures one or more images of the real-world objects, such as images of the users 1-3 (FIG. 1), in front of the camera 102 to generate image data 605, such as the image data 308 generated during the one or more gaming sessions or the image data 403 generated during the Nth gaming session, and provides the image data 605 to the communication device 608. The communication device 608 applies the wireless protocol to the image data 605 to generate one or more wireless packets and sends the wireless packets to the router and modem 604.

The controls 614 of the game controller 3 are selected or moved by the user 3 to generate the control input signals, which are processed by the DSPS 620. The DSPS 620 processes, such as measures or samples or filters or amplifies or a combination thereof, the control input signals to output the control input data 310C. For example, the DSPS 620 identifies a button of the game controller 3 selected by the user 3. As another example, the DSPS 620 identifies whether a joystick of the game controller 3 is moved or a button of the game controller 3 is selected by the user 3. The control input data 310C is sent from the DSPS 620 to the communication device 626. The communication device 626 applies the wireless protocol to the control input data 310C to generate one or more wireless packets and sends the wireless packets to the router and modem 604. In a similar manner, wireless packets are generated by the communication devices 622 and 624 of the game controllers 1 and 2.

The router of the router and modem 604 receives the wireless packets from the communication devices 608, 622, 624, and 626, and applies the wireless protocol to obtain or extract the image data 605 and the control input data 310 from the wireless packets. The router of the router and modem 604 provides the image data 605 and the control input data 310 to the modem of the router and modem 604. The modem applies the network communication protocol to the image data 605 and the control input data 310 to generate one or more network packets. For example, the modem determines that the image data 605 and the control input data 310 are to be sent to the processor system 302 that is executing the game 314, and embeds a network address of the processor system 302 within the one or more network packets. The modem sends the one or more network packets via the computer network 602 to the processor system 302.

The processor system 302 applies the network communication protocol to the one or more network packets received from the router and modem 604 to obtain or extract the image data 605 and the control input data 310, and processes the image data 605 and the control input data 310 in a manner explained above to train the model 304. The processor system 302 generates the audio data 404 (FIG. 4) and applies the network communication protocol to the audio data 404 to generate one or more network packets. The processor system 302 sends the one or more network packets via the computer network 602 to the router and modem 604.

The modem of the router and modem 604 applies the network communication protocol to the one or more network packets received via the computer network 602 to obtain or extract the audio data 404. The router of the router and modem 604 applies the wireless protocol to the audio data 404 to generate one or more wireless packets and sends the wireless packets to the communication device 626 of the game controller 3. The communication device 626 of the game controller 3 applies the wireless protocol to the one or more wireless packets received from the router and modem 604 to obtain or extract the audio data 404 and sends the audio data 404 to the headphone 402 for output of the audio data 404 as sound to the user 3. For example, the communication device 626 of the game controller 3 applies the wired protocol to generate one or more packets having the audio data 404 and sends the one or more packets via the wired connection to the headphone 402. As another example, the communication device 626 of the game controller 3 applies the wireless protocol to generate one or more wireless packets and sends the one or more wireless packets via the wireless connection to the headphone 402.
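The repeated "apply a protocol to generate packets, then apply it again to extract the payload" pattern in the paragraphs above can be illustrated with a toy framing scheme. This is purely a pedagogical stand-in; the header-prefix framing below is an assumption and does not represent real Wi-Fi™ or TCP/IP packet formats.

```python
# Toy model of the packetization chain: each hop wraps the payload with its
# protocol on send and unwraps it on receive, so the audio data survives the
# processor -> (network protocol) -> modem -> (wireless protocol) -> controller path.

def wrap(payload: bytes, protocol: str) -> bytes:
    """'Apply the protocol' on send: prefix a protocol header."""
    return protocol.encode() + b"|" + payload

def unwrap(packet: bytes, protocol: str) -> bytes:
    """'Apply the protocol' on receive: strip and verify the header."""
    header = protocol.encode() + b"|"
    assert packet.startswith(header), "unexpected protocol"
    return packet[len(header):]

# Audio data travels from the processor system to the headphone side.
audio = b"run away"
network_packet = wrap(audio, "tcpip")                    # processor system sends
wireless_packet = wrap(unwrap(network_packet, "tcpip"),  # modem extracts,
                       "wifi")                           # router re-wraps
recovered = unwrap(wireless_packet, "wifi")              # controller extracts
```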

In one embodiment, each communication device 608, 622, 624, and 626 communicates with the router and modem 604 via a wired connection, such as a cable.

In one embodiment, the display screen 632 is coupled to the router and modem 604 via the communication device 608. For example, the display screen 632 is coupled to the communication device 608. The router of the router and modem 604 applies the wireless protocol to the image frame data received via the computer network 602 to generate one or more wireless packets and sends the one or more wireless packets to the communication device 608. The communication device 608 applies the wireless protocol to the one or more wireless packets to extract or obtain the image frame data and sends the image frame data to the display screen 632 for display of one or more images of the game 314.

FIG. 6B is a diagram of an embodiment of a system 640 to illustrate communication between the processor system 302 and multiple devices that include the camera 102 and the game controllers 1-3 via the computer network 602, the router and modem 604, a game console 642, and a server system 644. An example of the game console 642 is a video game console or a computer or a combination of a central processing unit (CPU) and a graphics processing unit (GPU). To illustrate, the game console 642 is a Sony PlayStation™ or a Microsoft Xbox™. The game console 642 includes the processor system 302 and a communication device 646, such as a Wi-Fi™ communication device or a Bluetooth™ communication device. As an example, a processor system, as used herein, includes one or more CPUs and one or more GPUs, and the one or more CPUs are coupled to the one or more GPUs.

An example of the server system 644 includes one or more servers within one or more data centers. A server, as used herein, can be a game console. As an example, the server system 644 includes one or more virtual machines. The communication device 646 is coupled to the communication device 608 (FIG. 6A) of the camera 102 via a wireless connection, such as a Wi-Fi™ connection or a Bluetooth™ connection. Moreover, the communication device 646 is coupled to the communication device 622 of the game controller 1 via a wireless connection, is coupled to the communication device 624 of the game controller 2 via a wireless connection, and is coupled to the communication device 626 of the game controller 3 via a wireless connection. The communication device 646 is coupled to the processor system 302, such as the AI processor 306 (FIG. 3A) and one or more game processors, such as a CPU and a GPU, that execute the game 314 (FIG. 3A). The processor system 302 is coupled to the router and modem 604 via a wired connection. The router and modem 604 is coupled via the computer network 602 to the server system 644.

The processor system 302, instead of or in conjunction with the server system 644, executes the game 314 for display of virtual scenes on the display screen 632. For example, in response to receiving login information that is provided by the user 1 via the game controller 1, the processor system 302 sends a request to the server system 644 via the computer network 602 to determine whether the login information is valid. Upon receiving an indication from the server system 644 via the computer network 602 that the login information received from the game controller 1 is valid, the processor system 302 executes the game 314 for play of the game 314 by the user 1 via the game controller 1 and the game console 642. On the other hand, upon receiving an indication from the server system 644 via the computer network 602 that the login information received from the game controller 1 is invalid, the processor system 302 does not execute the game 314 for play by the user 1 via the game controller 1 and the game console 642.

Similarly, as another example, in response to receiving login information that is provided by the user 3 via the game controller 3, the processor system 302 sends a request to the server system 644 via the computer network 602 to determine whether the login information is valid. Upon receiving an indication from the server system 644 via the computer network 602 that the login information received from the game controller 3 is valid, the processor system 302 executes the game 314 for play of the game 314 by the user 3 via the game controller 3 and the game console 642. On the other hand, upon receiving an indication from the server system 644 via the computer network 602 that the login information received from the game controller 3 is invalid, the processor system 302 does not execute the game 314 for play by the user 3 via the game controller 3 and the game console 642.

The communication device 646 receives the wireless packets having the image data 605 and the control input data 310 from the camera 102 and the game controllers 1-3, applies the wireless protocol to the wireless packets to extract the image data 605 and the control input data 310 from the wireless packets, and provides the image data 605 and the control input data 310 to the processor system 302. The processor system 302 trains the model 304 (FIG. 3A) based on the image data 605 and the control input data 310 in a manner described above, and generates the audio data 404. The processor system 302 provides the audio data 404 to the communication device 646. The communication device 646 applies the wireless protocol to the audio data 404 to generate one or more wireless packets and sends the wireless packets to the communication device 626 of the game controller 3.

In one embodiment, some of the functions described herein as being performed by the processor system 302 are performed by a processor system of the game console 642, and the remaining functions, described herein as being performed by the processor system 302, are instead performed by the server system 644.

FIG. 6C is a diagram of an embodiment of a system 660 to illustrate communication between the processor system 302 including the model 304 (FIG. 3A) and multiple smart phones 1, 2, and 3. The system 660 includes the smart phones 1, 2, and 3, the router and modem 604, the computer network 602, the processor system 302, and the headphone 402. The smart phones 1-3 are coupled to the router and modem 604, which is coupled via the computer network 602 to the processor system 302. Each of the smart phones 1-3 is coupled to the router and modem 604 via a wireless connection, such as a Wi-Fi™ connection or a Bluetooth™ connection. The headphone 402 is coupled to the smart phone 3 via a wired connection or a wireless connection.

The user 1 operates the smart phone 1. Similarly, the user 2 operates the smart phone 2 and the user 3 operates the smart phone 3. Each of the smart phones 1, 2, and 3 displays the game 314 (FIG. 3A), which is being executed by the processor system 302. For example, each of the smart phones 1-3 displays a virtual scene of the game 314. Each smart phone 1-3 has a camera for capturing images of the real-world objects within a field-of-view of the camera.

During the play of the game 314, the smart phone 1 generates the control input data 310A when operated by the user 1. For example, the smart phone 1 displays a virtual joystick and one or more virtual buttons on its display device. When the virtual joystick or the one or more virtual buttons are used by the user 1, the control input data 310A is generated. Also, the camera of the smart phone 1 captures the image data 605 of the gestures that are performed by the user 1. Similarly, during the play of the game 314, the smart phone 2 generates the control input data 310B when operated by the user 2, and the smart phone 3 generates the control input data 310C when operated by the user 3.

The smart phone 1 applies a wireless protocol to packetize the image data 605 into one or more wireless packets, and the one or more wireless packets are sent from the smart phone 1 to the router and modem 604. The router and modem 604 performs the functions described above to obtain the image data 605 and to generate one or more network packets including the image data 605, and sends the one or more network packets via the computer network 602 to the processor system 302.

Also, the smart phone 3 applies a wireless protocol to packetize the control input data 310C into one or more wireless packets, and the one or more wireless packets are sent from the smart phone 3 to the router and modem 604. The router and modem 604 performs the functions described above to obtain the control input data 310C and to generate one or more network packets including the control input data 310C, and sends the one or more network packets via the computer network 602 to the processor system 302.
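The packetize-and-reassemble flow described above can be sketched as follows. This is a minimal illustration, not the disclosed protocol: the function names, the 4-byte sequence-number header, and the chunk size are all illustrative assumptions.

```python
# Hypothetical sketch of packetizing a payload (e.g., image or control input
# data) into fixed-size packets, each prefixed with a sequence number so the
# receiver can reassemble the payload even if packets arrive out of order.
# Header layout and sizes are illustrative, not part of the patent.

def packetize(payload: bytes, max_payload: int = 1024) -> list[bytes]:
    """Split payload into packets of the form [4-byte seq][chunk]."""
    packets = []
    for seq, start in enumerate(range(0, len(payload), max_payload)):
        chunk = payload[start:start + max_payload]
        packets.append(seq.to_bytes(4, "big") + chunk)
    return packets

def reassemble(packets: list[bytes]) -> bytes:
    """Order packets by sequence number and concatenate their chunks."""
    ordered = sorted(packets, key=lambda p: int.from_bytes(p[:4], "big"))
    return b"".join(p[4:] for p in ordered)
```

A 3000-byte payload, for example, would yield three packets, and reassembling them in any order recovers the original bytes.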

The processor system 302 receives the network packets having the image data 605 and the control input data 310C, trains the model 304 in the manner described above, and generates the audio data 404. The processor system 302 packetizes the audio data 404 into one or more network packets and sends the one or more network packets via the computer network 602 to the router and modem 604. The router and modem 604 performs the functions described above to obtain the audio data 404 from the one or more network packets, and generates and sends one or more wireless packets including the audio data 404 to the smart phone 3.

The smart phone 3 receives the one or more wireless packets, applies the wireless protocol to the one or more wireless packets to extract or obtain the audio data 404, and provides the audio data 404 to the headphone 402 via the wireless or wired connection. For example, the smart phone 3 applies the wired protocol to generate one or more packets having the audio data 404 and sends the one or more packets via the wired connection to the headphone 402. As another example, the smart phone 3 applies the wireless protocol to generate one or more wireless packets and sends the one or more wireless packets via the wireless connection to the headphone 402. The headphone 402 outputs sound representing the audio data 404 to the user 3 to secretly communicate a message from the user 1 to the user 3.

In one embodiment, instead of a smart phone, a game computer, such as a desktop computer, a laptop computer, or a tablet, is used.

In one embodiment, instead of a smart phone, a head-mounted display (HMD) is used. For example, the user 1 is wearing an HMD on his/her head, the user 2 is wearing an HMD on his/her head, and the user 3 is wearing an HMD on his/her head. Each HMD is used with a corresponding game controller. For example, the HMD worn by the user 1 is used with the game controller 1 and the HMD worn by the user 2 is used with the game controller 2.

In an embodiment, a smart phone acts as a game controller. For example, the user 1 is wearing his/her HMD and using the smart phone 1 as a game controller, and the user 2 is wearing his/her HMD and using the smart phone 2 as a game controller.

FIG. 6D is a diagram of an embodiment of a system 670 to illustrate communication between the smart phones 1-3 and the processor system 302 via the computer network 602 without using the router and modem 604 between the computer network 602 and the smart phones 1-3. The system 670 includes the smart phones 1-3, a cellular network 672, the headphone 402, the computer network 602, and the processor system 302.

Each smart phone 1-3 is coupled to the cellular network 672 via a cellular wireless connection, such as a fourth-generation cellular wireless (4G) connection or a fifth-generation cellular wireless (5G) connection. The cellular network 672 is coupled to the computer network 602, which is coupled to the processor system 302.

The smart phone 1 generates one or more packets by applying a cellular communication protocol, such as the 4G or the 5G protocol, to the image data 605 and sends the one or more packets to the cellular network 672. The cellular network 672 receives the one or more packets and applies the cellular communication protocol to obtain or extract the image data 605, and applies the network communication protocol to the image data 605 to generate one or more network packets. The one or more network packets generated by the cellular network 672 are sent via the computer network 602 to the processor system 302. The processor system 302 processes the one or more network packets received from the cellular network 672 in a manner described above to generate the audio data 404, and sends one or more network packets including the audio data 404 via the computer network 602 to the cellular network 672.

The cellular network 672 applies the network communication protocol to the one or more network packets received from the processor system 302 to extract or obtain the audio data 404, and applies the cellular communication protocol to the audio data 404 to generate one or more packets. The cellular network 672 sends the one or more packets including the audio data 404 to the smart phone 3.

FIG. 7A is a diagram of an embodiment of the headphone 402. The headphone 402 includes a communication device 702, a digital-to-analog (D/A) converter 704, an audio amplifier 706, and a speaker 708. An example of the communication device 702 is a communication circuit that applies the wired protocol or the wireless protocol.

The communication device 702 is coupled to the communication device 626 (FIG. 6A) of the game controller 3 or to the smart phone 3 (FIG. 6C). The digital-to-analog converter 704 is coupled to the communication device 702 and the audio amplifier 706 is coupled to the digital-to-analog converter 704. Also, the speaker 708 is coupled to the audio amplifier 706.

The communication device 702 receives one or more packets having the audio data 404 from the communication device 626 or from the smart phone 3, and applies a protocol, such as the wired protocol or the wireless protocol, to extract or obtain the audio data 404 from the one or more packets. The communication device 702 sends the audio data 404 to the digital-to-analog converter 704. The digital-to-analog converter 704 converts the audio data 404 from a digital format to an analog format to output analog audio signals. The digital-to-analog converter 704 sends the analog audio signals output based on the audio data 404 to the audio amplifier 706. The audio amplifier 706 amplifies the analog audio signals, such as by increasing their amplitude or magnitude, to output amplified audio signals, which are electrical signals. The speaker 708 converts electrical energy of the amplified audio signals into sound energy to output sounds to be heard by the user 3 (FIG. 1).
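The D/A conversion and amplification stages described above can be sketched numerically. This is a simplified model under stated assumptions: 16-bit signed PCM samples, a hypothetical reference voltage, and an amplifier that clips at its supply rails; none of these values come from the patent.

```python
# Sketch of the headphone's audio path: digital samples -> analog voltages
# -> amplified (and rail-limited) voltages. All parameters are illustrative.

def dac_convert(samples_16bit, vref=1.0):
    """Map signed 16-bit PCM samples to analog voltages in [-vref, vref]."""
    return [s / 32768.0 * vref for s in samples_16bit]

def amplify(voltages, gain=2.0, rail=3.3):
    """Increase the amplitude of each voltage, clipping at the supply rails."""
    return [max(-rail, min(rail, v * gain)) for v in voltages]
```

For example, the mid-scale sample 16384 maps to 0.5 V, and a 2.0 V input with a gain of 2.0 is clipped to the 3.3 V rail.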

In one embodiment, instead of the speaker 708, multiple speakers are used.

FIG. 7B is a diagram of an embodiment of a haptic feedback system 710 to illustrate that instead of the audio data 404, haptic feedback data 718 is received by the haptic feedback system 710 to provide haptic feedback to the user 3 (FIG. 1). The haptic feedback data 718 is another example of the recommendation generated by the inferred communication engine 318. The haptic feedback system 710 includes a communication device 720, a driver system 712, a motor system 714, and a haptic feedback device 716. An example of the communication device 720 is a communication circuit that applies the wired or wireless protocol. To illustrate, the communication device 626 (FIG. 6A) is an example of the communication device 720. For example, when the haptic feedback system 710 is embedded within the game controller 3, the communication device 626 is an example of the communication device 720. As another illustration, the communication device 626 (FIG. 6A) is a part of the smart phone 3. For example, when the haptic feedback system 710 is embedded within the smart phone 3, the communication device 626 is an example of a wireless access card (WAC) of the smart phone 3. An example of the driver system 712 includes one or more transistors, and the transistors are coupled to each other. An example of the motor system 714 includes one or more electric motors, such as direct current (DC) motors or alternating current (AC) motors. Each electric motor includes a stator and a rotor. An example of the haptic feedback device 716 includes a metal object or a plastic object that is in contact with the user 3.

Instead of or in addition to generating the audio data 404, the processor system 302 generates the haptic feedback data 718 based on the training of the model 304 (FIG. 3A). The processor system 302 generates one or more packets having the haptic feedback data 718 in the same manner in which the processor system 302 generates one or more packets having the audio data 404 and sends the haptic feedback data 718 for receipt by the communication device 720. For example, with reference to FIG. 6A, the processor system 302 applies the network communication protocol to the haptic feedback data 718 to generate one or more network packets and sends the one or more network packets via the computer network 602 to the router and modem 604. The router and modem 604 processes the one or more network packets having the haptic feedback data 718 in the same manner in which the router and modem 604 processes the one or more network packets having the audio data 404, applies the wireless protocol to the haptic feedback data 718 to generate one or more wireless packets, and sends the one or more wireless packets to the communication device 626 of the game controller 3. As another example, with reference to FIG. 6B, the processor system 302 of the game console 642 applies the wireless protocol to the haptic feedback data 718 to generate one or more wireless packets, and sends the one or more wireless packets to the communication device 626 of the game controller 3. As yet another example, with reference to FIG. 6C, the router and modem 604 processes the one or more network packets having the haptic feedback data 718 in the same manner in which the router and modem 604 processes the one or more network packets having the audio data 404, applies the wireless protocol to the haptic feedback data 718 to generate one or more wireless packets, and sends the one or more wireless packets to the smart phone 3. As another example, with reference to FIG. 6D, the cellular network 672 receives one or more network packets having the haptic feedback data 718 via the computer network 602 from the processor system 302 to obtain the haptic feedback data 718 from the one or more network packets, and applies the cellular communication protocol to the haptic feedback data 718 to generate one or more packets. The cellular network 672 sends the one or more packets having the haptic feedback data 718 to the smart phone 3.

Referring back to FIG. 7B, the communication device 720 receives one or more packets having the haptic feedback data 718, applies a protocol, such as the wired protocol, or the wireless protocol, or the cellular communication protocol, to extract or obtain the haptic feedback data 718 from the one or more packets, and sends the haptic feedback data 718 to the driver system 712. Upon receiving the haptic feedback data 718, the driver system 712 generates one or more current signals and applies the one or more current signals to corresponding one or more electric motors of the motor system 714. The one or more rotors of the one or more electric motors of the motor system 714 rotate to move, such as vibrate, the haptic feedback device 716. When the haptic feedback device 716 is in contact with the user 3, the user 3 feels the motion or movement of the haptic feedback device 716.
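The driver stage described above, which turns haptic feedback data into per-motor drive currents, can be sketched as follows. The record layout (an intensity level and a list of motors) and the maximum current are hypothetical, chosen only to illustrate the mapping.

```python
# Hypothetical mapping from haptic feedback data to drive currents for the
# motor system. The data format and current scale are assumptions.

def drive_signals(haptic_data: dict, max_current_ma: float = 100.0) -> dict:
    """Map haptic feedback data to per-motor drive currents in milliamps.

    haptic_data example: {"intensity": 0.5, "motors": ["left", "right"]}
    """
    level = max(0.0, min(1.0, haptic_data.get("intensity", 0.0)))
    return {m: level * max_current_ma for m in haptic_data.get("motors", [])}
```

A half-intensity command to one motor would thus produce a 50 mA drive signal under these assumptions.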

FIG. 7C is a diagram of an embodiment of a display device 732 to illustrate a display of a message 730 on a display screen 736. The display device 732 includes a communication device 734 and the display screen 736. Examples of the display device 732 include an LCD display device, an LED display device, and a plasma display device. Examples of the display screen 736 include an LCD display screen, an LED display screen, and a plasma display screen. To illustrate, the display device 732 is a display device of the smart phone 3 (FIG. 6C) or of the game controller 3 or of a tablet or of a computer. Examples of a computer include a desktop computer and a laptop computer. Examples of the communication device 734 include a communication circuit that applies the wired or wireless protocol for communication of data. The communication device 734 is coupled to the display screen 736. The communication device 626 (FIG. 6A) is an example of the communication device 734.

Instead of or in addition to generating other forms of data, such as the audio data 404 and the haptic feedback data 718, the processor system 302 generates image frame data 738 based on the training of the model 304 (FIG. 3A). The image frame data 738 is another example of the recommendation generated by the inferred communication engine 318. In the same manner in which the processor system 302 generates one or more packets having the audio data 404, the processor system 302 generates one or more packets by applying a protocol, such as the network communication protocol, the wired protocol, or the wireless protocol, to the image frame data 738 and sends the one or more packets to the display device 732. For example, with reference to FIG. 6A, the processor system 302 applies the network communication protocol to the image frame data 738 to generate one or more network packets and sends the one or more network packets via the computer network 602 to the router and modem 604. The router and modem 604 processes the one or more network packets having the image frame data 738 in the same manner in which the router and modem 604 processes the one or more network packets having the audio data 404 to obtain the image frame data 738 from the one or more network packets, applies the wireless protocol to the image frame data 738 to generate one or more wireless packets, and sends the one or more wireless packets to the communication device 626. As another example, with reference to FIG. 6B, the processor system 302 of the game console 642 applies the wireless protocol to the image frame data 738 to generate one or more wireless packets, and sends the wireless packets to the communication device 626 of the game controller 3. As yet another example, with reference to FIG. 6C, the router and modem 604 processes the one or more network packets having the image frame data 738 in the same manner in which the router and modem 604 processes the one or more network packets having the audio data 404 to obtain the image frame data 738, applies the wireless protocol to the image frame data 738 to generate one or more wireless packets, and sends the one or more wireless packets to the smart phone 3. As another example, with reference to FIG. 6D, the cellular network 672 receives one or more network packets having the image frame data 738 via the computer network 602 from the processor system 302, applies the network communication protocol to extract the image frame data 738 from the one or more network packets, and applies the cellular communication protocol to the image frame data 738 to generate one or more packets. The cellular network 672 sends the one or more packets having the image frame data 738 to the smart phone 3.

Referring back to FIG. 7C, the communication device 734 receives the one or more packets having the image frame data 738, applies a protocol, such as the cellular communication protocol, the wired protocol, or the wireless protocol, to extract or obtain the image frame data 738 from the one or more packets, and sends the image frame data 738 to the display screen 736. Upon receiving the image frame data 738, the display screen 736 displays the message 730. When the user 3 views the message 730, such as “Run Away!”, the user 3 controls the virtual object 208 to run away in the direction 212 (FIG. 2).

FIG. 8 is a diagram of an embodiment of a system 800 to illustrate components of a game controller 802. The system 800 includes the game controller 802 and a communication device 804. Any of the game controllers 1-3 is an example of the game controller 802. Examples of the communication device 804 include a communication circuit that applies a protocol, such as the cellular communication protocol, or the wired protocol, or the wireless protocol. The game controller 802 includes joysticks 1 and 2, and multiple buttons 1, 2, 3, and 4. Also, the game controller 802 includes a clock source 804, a gyroscope 806, a magnetometer 808, and an accelerometer 810. The game controller 802 includes an identifier circuit 812. An example of the clock source 804 includes a clock oscillator or an electronic oscillator or a digital pulse generator. An example of the identifier 812 includes a combination of an analog-to-digital converter and a processor, such as a programmable logic device (PLD), an application specific integrated circuit (ASIC), or a digital signal processor (DSP). To illustrate, the identifier 812 includes an analog-to-digital converter that is coupled to the PLD or the ASIC or the DSP. As another example, any of the DSPs 616, 618, and 620 (FIG. 6A) is an example of the identifier 812. Also, the identifier 812 has a clock input for receiving a clock signal from the clock source 804.

Each of the joystick 1 and the joystick 2 is coupled to the gyroscope 806. Moreover, each of the joystick 1 and the joystick 2 is coupled to the magnetometer 808. Each of the joystick 1 and the joystick 2 is coupled to the accelerometer 810. Each of the clock source 804, the joystick 1, the joystick 2, the button 1, the button 2, the button 3, the button 4, the gyroscope 806, the magnetometer 808, and the accelerometer 810 is coupled to the identifier 812. The identifier 812 is coupled to the communication device 804.

The identifier 812 operates in synchronization with the clock signal generated by the clock source 804. When one or more of the joysticks 1 and 2 are moved by a user, such as the user 3, the gyroscope 806, the magnetometer 808, and the accelerometer 810 generate one or more analog signals representing position and orientation information of the one or more of the joysticks 1 and 2, and send the one or more analog signals to the identifier 812. The position and orientation information of a joystick includes a position, such as an (x, y, z) co-ordinate, with respect to a reference co-ordinate, e.g., (0, 0, 0), that is located at a point on the game controller 802, and the joystick moves with respect to or about the point. It should be noted that the location of the point is at an intersection of an x-axis, a y-axis, and a z-axis. The position and orientation information of the joystick includes an orientation, such as (θ, ϕ, γ), of the joystick. The angle θ of the joystick is with respect to the x-axis, the angle ϕ of the joystick is with respect to the y-axis, and the angle γ of the joystick is with respect to the z-axis. The analog-to-digital converter of the identifier 812 converts the one or more analog signals to corresponding one or more digital signals, and the processor of the identifier 812 processes the one or more digital signals to determine the position and orientation information of one or more of the joysticks 1 and 2.
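The position and orientation information described above can be represented as a simple data structure. This is an illustrative sketch only; the field names are assumptions, while the (x, y, z) position and (θ, ϕ, γ) orientation semantics follow the paragraph above.

```python
# Illustrative container for a joystick's position and orientation
# information, as produced by the identifier from the sensor signals.
from dataclasses import dataclass

@dataclass
class JoystickPose:
    """Pose of a joystick relative to the reference point (0, 0, 0)
    on the controller, about which the joystick moves.

    (x, y, z) is the position; (theta, phi, gamma) are the angles
    measured about the x-, y-, and z-axes, respectively.
    """
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    theta: float = 0.0
    phi: float = 0.0
    gamma: float = 0.0
```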

Moreover, the identifier 812 receives one or more analog signals that are generated when one or more of the joysticks 1 and 2 are moved by the user, and the analog-to-digital converter of the identifier 812 converts the analog signals to corresponding one or more digital signals. The processor of the identifier 812 receives the one or more digital signals identifying one or more of the joysticks 1 and 2 that are moved, and processes the one or more digital signals to identify the one or more of the joysticks. For example, when the processor of the identifier 812 receives a digital signal via a first channel, such as a wire, that couples the joystick 1 to the identifier 812, the processor determines that the joystick 1 is moved. When the processor of the identifier 812 receives a digital signal via a second channel, such as a wire, that couples the joystick 2 to the identifier 812, the processor determines that the joystick 2 is moved. The processor of the identifier 812 measures, based on the clock signal, a time at which the joystick 1 is moved by the user and a time at which the joystick 2 is moved by the user.

Similarly, the identifier 812 receives one or more analog signals that are generated when one or more of the buttons 1-4 are selected by the user, and the analog-to-digital converter of the identifier 812 converts the analog signals to corresponding one or more digital signals. The processor of the identifier 812 receives the one or more digital signals identifying one or more of the buttons 1-4 that are selected, and processes the one or more digital signals to identify the one or more of the buttons 1-4. For example, when the processor of the identifier 812 receives a digital signal via a third channel, such as a wire, that couples the button 1 to the identifier 812, the processor determines that the button 1 is selected. When the processor of the identifier 812 receives a digital signal via a fourth channel, such as a wire, that couples the button 2 to the identifier 812, the processor determines that the button 2 is selected. The processor of the identifier 812 measures, based on the clock signal, a time at which any of the buttons 1-4 is selected by the user. The identity of the game controller 802, the identification of one or more of the joysticks 1 and 2 that are moved, the identification of one or more of the buttons 1-4 that are selected, the times at which one or more of the joysticks 1 and 2 are moved, the times at which one or more of the buttons 1-4 are selected, and the position and orientation information of one or more of the joysticks 1 and 2 that are moved is an example of the control input data 310A, or the control input data 310B, or the control input data 310C.
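The control-input record assembled by the identifier, combining the controller identity, the identified control, the clocked timestamp, and any pose information, can be sketched as follows. The dictionary keys and the use of a monotonic clock are illustrative assumptions, not the disclosed format.

```python
# Hypothetical assembly of one control-input event, mirroring the fields the
# identifier produces: which controller, which joystick/button, when, and
# (for joysticks) the position and orientation information.
import time

def make_control_input(controller_id, control_name, pose=None,
                       clock=time.monotonic):
    """Assemble a control-input record for one joystick move or button press."""
    return {
        "controller": controller_id,  # identity of the game controller
        "control": control_name,      # which joystick or button produced it
        "time": clock(),              # timestamp derived from the clock signal
        "pose": pose,                 # (x, y, z, theta, phi, gamma) or None
    }
```

Passing a fixed clock function makes the record deterministic, which is convenient for testing such an encoder.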

The identifier 812 provides the control input data 310A, 310B, or 310C to the communication device 804. The communication device 804 applies the protocol, such as the cellular communication protocol, or the wired protocol, or the wireless protocol, to the control input data 310A, 310B, or 310C to generate one or more packets, and sends the one or more packets to the processor system 302. For example, with reference to FIG. 6A, the communication device 622 applies the wireless protocol to the control input data 310A to generate one or more wireless packets and sends the wireless packets to the router and modem 604. As another example, with reference to FIG. 6A, the communication device 626 applies the wireless protocol to the control input data 310C to generate one or more wireless packets and sends the wireless packets to the router and modem 604.

In one embodiment, the communication device 804 is a part of or located within the game controller 802. For example, any of the communication devices 622, 624, and 626 (FIG. 6A) is an example of the communication device 804.

FIG. 9 is a diagram of an embodiment of the camera 102 to illustrate generation and transfer of the image data 605 from the camera 102 to the processor system 302. The camera 102 includes a lens 902, a detector 904, a signal processor 906, a clock source 908, and the communication device 608. An example of the detector 904 includes one or more photodiodes that detect light and convert the light into electrical signals. An example of the signal processor 906 includes a combination of an analog-to-digital converter and a processing component, such as an ASIC, a PLD, a microprocessor, or a digital signal processor. The analog-to-digital converter of the signal processor 906 is coupled to the processing component of the signal processor 906. An example of the clock source 908 is a digital signal generator or an electronic oscillator.

The detector 904 is interfaced with the lens 902 and coupled to the signal processor 906, which is coupled to the clock source 908. Also, the signal processor 906 is coupled to the communication device 608. The lens 902 focuses light that is reflected by one or more of the real-world objects in front of the camera 102, and the light is focused on the detector 904. The detector 904 converts light energy of the light into electrical energy of the electrical signals, and provides the electrical signals to the signal processor 906. The analog-to-digital converter of the signal processor 906 converts the electrical signals from an analog form to a digital form to output digital signals. The processing component of the signal processor 906 receives the digital signals from the analog-to-digital converter of the signal processor 906 and generates the image data 605 from the digital signals.
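The analog-to-digital conversion step described above can be sketched numerically. This is a generic quantization sketch under assumed parameters (a 3.3 V reference and a 10-bit converter); the patent does not specify the converter's resolution or reference voltage.

```python
# Sketch of the signal processor's A/D stage: an analog detector voltage is
# clamped to the input range and quantized to an integer code. The reference
# voltage and bit depth are illustrative assumptions.

def adc_sample(voltage: float, vref: float = 3.3, bits: int = 10) -> int:
    """Quantize an analog voltage to a digital code in [0, 2**bits - 1]."""
    v = max(0.0, min(vref, voltage))
    return round(v / vref * ((1 << bits) - 1))
```

A full-scale input thus maps to code 1023 and a zero input to code 0 for a 10-bit converter.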

Also, the processing component of the signal processor 906 measures the time period, which includes times, at which the image data 605 is generated. For example, based on a clock signal received from the clock source 908, the processing component determines a time at which the action A1 is captured within the image data 605, a time at which the action A2 is captured within the image data 605, and a time at which the action A3 is captured within the image data 605. The processing component provides the image data 605 and the time period during which the image data 605 is generated to the communication device 608 for sending one or more packets including the time period and the image data 605 to the processor system 302.

FIG. 10 is a flow diagram conceptually illustrating various operations which are performed for streaming a cloud video game to a client device, in accordance with implementations of the disclosure. Examples of the client device include a game controller, a smart phone, a game console, and a computer. A game server 1002 executes a video game and generates raw (uncompressed) video 1004 and audio 1006. The image data 605 (FIG. 6A) is an example of the video 1004. The game server 1002 is an example of the processor system 302 (FIG. 3A). The video 1004 and audio 1006 are captured and encoded for streaming purposes, as indicated at reference 1008 in the illustrated diagram. The encoding provides for compression of the video and audio streams to reduce bandwidth usage and optimize the gaming experience. Examples of encoding formats include H.265/MPEG-H, H.264/MPEG-4, H.263/MPEG-4, H.262/MPEG-2, WMV, VP6/7/8/9, etc.

Encoded audio 1010 and encoded video 1012 are further packetized into network packets, as indicated at reference numeral 1014, for purposes of transmission over a computer network 1020, which is an example of the computer network 602 (FIG. 6A). In some embodiments, the network packet encoding process also employs a data encryption process, thereby providing enhanced data security. In the illustrated implementation, audio packets 1016 and video packets 1018 are generated for transport over the computer network 1020.
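Because audio, video, and (below) haptic feedback packets travel over the same network, each packet needs a way to indicate which stream it belongs to so the client can separate them for decoding. A minimal sketch of such tagging and demultiplexing follows; the 1-byte stream id, 4-byte sequence number, and function names are illustrative assumptions, not the disclosed packet format.

```python
# Hypothetical stream tagging: each chunk carries [1-byte stream id]
# [4-byte sequence number][payload], so the client can demultiplex the
# audio, video, and haptic streams from a shared transport.

STREAM_IDS = {"audio": 0, "video": 1, "haptic": 2}
STREAM_NAMES = {v: k for k, v in STREAM_IDS.items()}

def make_stream_packet(stream: str, seq: int, chunk: bytes) -> bytes:
    """Prefix a chunk with its stream id and sequence number."""
    return bytes([STREAM_IDS[stream]]) + seq.to_bytes(4, "big") + chunk

def demux(packets) -> dict:
    """Group received packets by stream, stripping the 5-byte header."""
    out = {"audio": [], "video": [], "haptic": []}
    for p in packets:
        out[STREAM_NAMES[p[0]]].append(p[5:])
    return out
```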

The game server 1002 additionally generates haptic feedback data 1022, which is also packetized into network packets for network transmission. The haptic feedback data 718 (FIG. 7B) is an example of the haptic feedback data 1022. In the illustrated implementation, haptic feedback packets 1024 are generated for transport over the computer network 1020.

The foregoing operations of generating the raw video and audio and the haptic feedback data are performed on the game server 1002 of a data center, and the operations of encoding the video and audio, and packetizing the encoded audio/video and haptic feedback data for transport are performed by the streaming engine of the data center. As indicated at reference 1020, the audio, video, and haptic feedback packets are transported over the computer network. As indicated at reference 1026, the audio packets 1016, video packets 1018, and haptic feedback packets 1024 are disintegrated, e.g., parsed, etc., by a client device to extract encoded audio 1028, encoded video 1030, and haptic feedback data 1022 at the client device from the network packets. If data has been encrypted, then the data is also decrypted. The encoded audio 1028 and encoded video 1030 are then decoded by the client device, as indicated at reference 1034, to generate client-side raw audio and video data for rendering on a display device 1040 of the client device. The haptic feedback data 1022 is processed by the processor of the client device to produce a haptic feedback effect at a controller device 1042 or other interface device, e.g., the HMD, etc., through which haptic effects can be rendered. One example of a haptic effect is a vibration or rumble of the controller device 1042.

It will be appreciated that a video game is responsive to user inputs, and thus, a similar procedural flow to that described above for transmission and processing of user input, but in the reverse direction from client device to server, is performed. As shown, a controller device 1042 or another input device, e.g., the body part of the user 1, etc., or a combination thereof generates input data 1048. Any of the control input data 310A-310C (FIG. 3A) is an example of the input data 1048. The controller device 1042 is an example of any of the game controllers 1-3 (FIG. 3A). This input data 1048 is packetized at the client device for transport over the computer network to the data center. Input data packets 1046 are unpacked and reassembled by the game server 1002 to define the input data 1048 on the data center side. The input data 1048 is fed to the game server 1002, which processes the input data 1048 to generate a game state of the game.

During transport via the computer network 1020 of the audio packets 1016, the video packets 1018, and the haptic feedback packets 1024, in some embodiments, the transmission of data over the computer network 1020 is monitored to ensure a quality of service. For example, network conditions of the computer network 1020 are monitored as indicated by reference 1050, including both upstream and downstream network bandwidth, and the game streaming is adjusted in response to changes in available bandwidth. That is, the encoding and decoding of network packets is controlled based on present network conditions, as indicated by reference 1052.
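The bandwidth-adaptive adjustment described above can be sketched as a simple control rule: keep the encoder's bitrate below a fraction of the measured bandwidth, stepping down quickly under congestion and ramping up gently when capacity is available. The headroom fraction, floor, and ramp factor are illustrative assumptions, not values from the disclosure.

```python
# Sketch of bitrate adaptation for game streaming: the encoder target tracks
# the measured available bandwidth with some headroom. All constants are
# illustrative assumptions.

def adjust_bitrate(current_kbps: float, available_kbps: float,
                   headroom: float = 0.8, floor_kbps: float = 500.0) -> float:
    """Return the next encoder bitrate given measured available bandwidth."""
    target = available_kbps * headroom
    if current_kbps > target:
        # Congestion: step down immediately, but never below the floor.
        return max(floor_kbps, target)
    # Spare capacity: ramp up gently (25% per adjustment) toward the target.
    return min(target, current_kbps * 1.25)
```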

FIG. 11 is a block diagram of an embodiment of a game console 1100 that is compatible for interfacing with a display device of the client device and is capable of communicating via the computer network 1020 with a game hosting system, such as the server system 644 (FIG. 6B). The game console 642 (FIG. 6B) is an example of the game console 1100. The game console 1100 is located within a data center A or is located at a location at which the users 1-3 are located. In some embodiments, the game console 1100 is used to execute a game that is displayed on an HMD 1105. The game console 1100 is provided with various peripheral devices connectable to the game console 1100. The game console 1100 has a cell processor 1128, a dynamic random access memory (XDRAM) unit 1126, a Reality Synthesizer graphics processor unit 1130 with a dedicated video random access memory (VRAM) unit 1132, and an input/output (I/O) bridge 1134. The game console 1100 also has a Blu Ray® Disk read-only memory (BD-ROM) optical disk reader 1140 for reading from a disk 1140a and a removable slot-in hard disk drive (HDD) 1136, accessible through the I/O bridge 1134. Optionally, the game console 1100 also includes a memory card reader 1138 for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge 1134. The I/O bridge 1134 also connects to Universal Serial Bus (USB) 2.0 ports 1124, a gigabit Ethernet port 1122, an IEEE 802.11b/g wireless network (Wi-Fi) port 1120, and a Bluetooth® wireless link port 1118 capable of supporting Bluetooth connections.

In operation, the I/O bridge 1134 handles all wireless, USB and Ethernet data, including data from game controllers 842 and/or 1103 and from the HMD 1105. For example, when any of the users 1-3 is playing the game generated by execution of a portion of a game code, the I/O bridge 1134 receives input data from the game controllers 842 and/or 1103 and/or from the HMD 1105 via a Bluetooth link and directs the input data to the cell processor 1128, which updates a current state of the game accordingly. As an example, a camera within the HMD 1105 captures a gesture of any of the users 1-3 to generate an image representing the gesture. The image is an example of the input data. Each game controller 842 and 1103 is an example of the game controller 1, the game controller 2, or the game controller 3. Each of the game controllers 1-3 is an example of a hand-held controller (HHC).

The wireless, USB, and Ethernet ports also provide connectivity for other peripheral devices in addition to the game controllers 842 and 1103 and the HMD 1105, such as, for example, a remote control 1104, a keyboard 1106, a mouse 1108, a portable entertainment device 1110, such as, e.g., a Sony PlayStation Portable® entertainment device, etc., a video camera, such as, e.g., an EyeToy® video camera 1112, etc., a microphone headset 1114, and a microphone 1115. The portable entertainment device 1110 is an example of any of the game controllers 1-3. In some embodiments, such peripheral devices are connected to the game console 1100 wirelessly; for example, the portable entertainment device 1110 communicates via a Wi-Fi ad-hoc connection, whilst the microphone headset 1114 communicates via a Bluetooth link.

The provision of these interfaces means that the game console1100is also potentially compatible with other peripheral devices such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over Internet protocol (IP) telephones, mobile telephones, printers and scanners.

In addition, a legacy memory card reader 1116 is connected to the game console 1100 via the USB port 1124, enabling the reading of memory cards 1148 of a kind used by the game console 1100. The game controllers 842 and 1103 and the HMD 1105 are operable to communicate wirelessly with the game console 1100 via the Bluetooth link 1118, or to be connected to the USB port 1124, thereby also receiving power by which to charge batteries of the game controllers 842 and 1103 and the HMD 1105. In some embodiments, each of the game controllers 842 and 1103 and the HMD 1105 includes a memory, a processor, a memory card reader, permanent memory, such as, e.g., flash memory, etc., light emitters, such as, e.g., an illuminated spherical section, light emitting diodes (LEDs), or infrared lights, etc., a microphone and speaker for ultrasound communications, an acoustic chamber, a digital camera, an internal clock, a recognizable shape, such as, e.g., a spherical section facing the game console 1100, and wireless devices using protocols, such as, e.g., Bluetooth, Wi-Fi, etc.

The game controller 842 is a controller designed to be used with two hands of any of the users 1-3, and the game controller 1103 is a single-hand controller with an attachment. The HMD 1105 is designed to fit on top of a head and/or in front of eyes of any of the users 1-3. In addition to one or more analog joysticks and conventional control buttons, each game controller 842 and 1103 is susceptible to three-dimensional location determination. Similarly, the HMD 1105 is susceptible to three-dimensional location determination. Consequently, in some embodiments, gestures and movements by any of the users 1-3 of the game controller 842 and 1103 and of the HMD 1105 are translated as inputs to a game in addition to or instead of conventional button or joystick commands. Optionally, other wirelessly enabled peripheral devices, such as, e.g., the PlayStation™ Portable device, etc., are used as a controller. In the case of the PlayStation™ Portable device, additional game or control information, e.g., control instructions or number of lives, etc., is provided on a display screen of the device. In some embodiments, other alternative or supplementary control devices are used, such as, e.g., a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown), bespoke controllers, etc. Examples of bespoke controllers include a single or several large buttons for a rapid-response quiz game (also not shown).

The remote control 1104 is also operable to communicate wirelessly with the game console 1100 via the Bluetooth link 1118. The remote control 1104 includes controls suitable for the operation of the Blu-ray™ Disk BD-ROM reader 1140 and for navigation of disk content.

The Blu-ray™ Disk BD-ROM reader 1140 is operable to read CD-ROMs compatible with the game console 1100, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs. The Blu-ray™ Disk BD-ROM reader 1140 is also operable to read digital video disk-ROMs (DVD-ROMs) compatible with the game console 1100, in addition to conventional pre-recorded and recordable DVDs. The Blu-ray™ Disk BD-ROM reader 1140 is further operable to read BD-ROMs compatible with the game console 1100, as well as conventional pre-recorded and recordable Blu-ray Disks.

The game console 1100 is operable to supply audio and video, either generated or decoded via the Reality Synthesizer graphics unit 1130, through audio connectors 1150 and video connectors 1152 to a display and sound output device 1142, such as, e.g., a monitor or television set, etc., having a display screen 1144 and one or more loudspeakers 1146, or to supply the audio and video via the Bluetooth® wireless link port 1118 to the display device of the HMD 1105. The audio connectors 1150, in various embodiments, include conventional analogue and digital outputs, whilst the video connectors 1152 variously include component video, S-video, composite video, and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as phase alternating line (PAL) or National Television System Committee (NTSC), or in 720p, 1080i, or 1080p high definition. Audio processing, e.g., generation, decoding, etc., is performed by the cell processor 1128. An operating system of the game console 1100 supports Dolby® 5.1 surround sound, Dolby® Theatre Surround (DTS), and the decoding of 7.1 surround sound from Blu-ray® disks.

In some embodiments, a video camera, e.g., the video camera 1112, etc., comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus, so that compressed video data is transmitted in an appropriate format, such as an intra-image based motion picture expert group (MPEG) standard, for decoding by the game console 1100. The LED indicator of the video camera 1112 is arranged to illuminate in response to appropriate control data from the game console 1100, for example, to signify adverse lighting conditions, etc. Some embodiments of the video camera 1112 variously connect to the game console 1100 via a USB, Bluetooth, or Wi-Fi communication port. Various embodiments of a video camera include one or more associated microphones and are also capable of transmitting audio data. In several embodiments of a video camera, the CCD has a resolution suitable for high-definition video capture. In use, images captured by the video camera are incorporated within a game or interpreted as game control inputs. In another embodiment, a video camera is an infrared camera suitable for detecting infrared light.

In various embodiments, for successful data communication to occur with a peripheral device, such as, for example, a video camera or remote control, via one of the communication ports of the game console 1100, an appropriate piece of software, such as a device driver, etc., is provided.

In some embodiments, the aforementioned system devices, including the game console 1100, the HHC, and the HMD 1105, enable the HMD 1105 to display and capture video of an interactive session of a game. The system devices initiate an interactive session of a game, the interactive session defining interactivity between any of the users 1-3 and the game. The system devices further determine an initial position and orientation of the HHC and/or the HMD 1105 operated by any of the users 1-3. The game console 1100 determines a current state of a game based on the interactivity between any of the users 1-3 and the game. The system devices track a position and orientation of the HHC and/or the HMD 1105 during an interactive session of any of the users 1-3 with a game. The system devices generate a spectator video stream of the interactive session based on a current state of a game and the tracked position and orientation of the HHC and/or the HMD 1105. In some embodiments, the HHC renders the spectator video stream on a display screen of the HHC. In various embodiments, the HMD 1105 renders the spectator video stream on a display screen of the HMD 1105.

With reference to FIG. 12, a diagram illustrating components of an HMD 1202 is shown. The HMD 1202 is an example of the HMD 1105 (FIG. 11). The HMD 1202 includes a processor 1200 for executing program instructions. A memory device 1202 is provided for storage purposes. Examples of the memory device 1202 include a volatile memory, a non-volatile memory, or a combination thereof. A display device 1204 is included which provides a visual interface, e.g., display of image frames generated from save data, etc., that any of the users 1-3 (FIG. 1) views. A battery 1206 is provided as a power source for the HMD 1202. A motion detection module 1208 includes any of various kinds of motion sensitive hardware, such as a magnetometer 1210, an accelerometer 1212, and a gyroscope 1214.

An accelerometer is a device for measuring acceleration and gravity induced reaction forces. Single and multiple axis models are available to detect magnitude and direction of the acceleration in different directions. The accelerometer is used to sense inclination, vibration, and shock. In one embodiment, three accelerometers 1212 are used to provide the direction of gravity, which gives an absolute reference for two angles, e.g., world-space pitch and world-space roll, etc.
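The two absolute angles mentioned above can be recovered from a resting accelerometer reading with standard trigonometry. The sketch below assumes a right-handed axis convention with z pointing out of the device; the axis convention and function name are assumptions, not details from the specification.

```python
import math

def pitch_roll_from_accel(ax: float, ay: float, az: float) -> tuple:
    """Derive world-space pitch and roll (in radians) from a 3-axis
    accelerometer reading taken at rest, where the measured vector is
    the reaction to gravity. Axis convention is an assumption."""
    pitch = math.atan2(-ax, math.hypot(ay, az))  # rotation about the y-axis
    roll = math.atan2(ay, az)                    # rotation about the x-axis
    return pitch, roll

# Device lying flat: the gravity reaction is along +z, so both angles are ~0.
pitch, roll = pitch_roll_from_accel(0.0, 0.0, 9.81)
```

Note that gravity alone cannot fix the yaw angle, which is why the text pairs the accelerometer with a magnetometer for an absolute yaw reference.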

A magnetometer measures a strength and a direction of a magnetic field in a vicinity of the HMD 1202. In some embodiments, three magnetometers 1210 are used within the HMD 1202, ensuring an absolute reference for the world-space yaw angle. In various embodiments, the magnetometer is designed to span the earth's magnetic field, which is ±80 microtesla. Magnetometers are affected by metal, and provide a yaw measurement that is monotonic with actual yaw. In some embodiments, a magnetic field is warped due to metal in the real-world environment, which causes a warp in the yaw measurement. In various embodiments, this warp is calibrated using information from other sensors, e.g., the gyroscope 1214, a camera 1216, etc. In one embodiment, the accelerometer 1212 is used together with the magnetometer 1210 to obtain the inclination and azimuth of the HMD 1202.
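The combined use of accelerometer and magnetometer described above amounts to tilt compensation: the magnetometer vector is projected onto the horizontal plane using the accelerometer-derived pitch and roll, and the yaw is taken from the horizontal components. The sketch below uses the standard eCompass rotation under an assumed axis and sign convention, which the specification does not fix.

```python
import math

def azimuth(mx: float, my: float, mz: float,
            pitch: float, roll: float) -> float:
    """Tilt-compensated compass heading (radians): rotate the magnetometer
    reading into the horizontal plane using pitch/roll, then take the yaw.
    Axis and sign conventions are assumptions."""
    # Horizontal-plane components of the magnetic field vector.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-yh, xh)  # 0 when the x-axis points to magnetic north

# Level device with its x-axis aligned to magnetic north: heading is 0.
heading = azimuth(1.0, 0.0, 0.0, pitch=0.0, roll=0.0)
```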

A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum. In one embodiment, instead of the gyroscope 1214, three gyroscopes provide information about movement across the respective axes (x, y, and z) based on inertial sensing. The gyroscopes help in detecting fast rotations. However, the gyroscopes, in some embodiments, drift over time without the existence of an absolute reference. This triggers resetting the gyroscopes periodically, which can be done using other available information, such as positional/orientation determination based on visual tracking of an object, accelerometer, magnetometer, etc.
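One common way to correct the drifting gyro against an absolute reference, as the paragraph above suggests, is a complementary filter that blends the integrated gyro rate with an angle derived from a drift-free sensor such as the accelerometer. The 0.98 weight below is an illustrative choice, not a value from the specification.

```python
def complementary_filter(angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """One fusion step: trust the integrated gyro for fast rotations, but
    blend in the accelerometer's drift-free absolute angle so the estimate
    cannot drift without bound."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With the device stationary (zero gyro rate), the estimate converges to
# the accelerometer's absolute reference instead of drifting.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=0.5, dt=0.01)
```

After 500 stationary steps the estimate has effectively settled at the 0.5 rad reference, illustrating the periodic "reset" the text describes.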

The camera 1216 is provided for capturing images and image streams of a real-world environment, e.g., room, cabin, natural environment, etc., surrounding any of the users 1-3. In various embodiments, more than one camera is included in the HMD 1202, including a camera that is rear-facing, e.g., directed away from any of the users 1-3 when the user is viewing the display of the HMD 1202, etc., and a camera that is front-facing, e.g., directed towards any of the users 1-3 when the user is viewing the display of the HMD 1202, etc. Additionally, in several embodiments, a depth camera 1218 is included in the HMD 1202 for sensing depth information of objects in the real-world environment.

The HMD 1202 includes speakers 1220 for providing audio output. Also, a microphone 1222 is included, in some embodiments, for capturing audio from the real-world environment, including sounds from an ambient environment, speech made by any of the users 1-3, etc. The HMD 1202 includes a tactile feedback module 1224, e.g., a vibration device, etc., for providing tactile feedback to any of the users 1-3. In one embodiment, the tactile feedback module 1224 is capable of causing movement and/or vibration of the HMD 1202 to provide tactile feedback to any of the users 1-3.

LEDs 1226 are provided as visual indicators of statuses of the HMD 1202. For example, an LED may indicate battery level, power on, etc. A card reader 1228 is provided to enable the HMD 1202 to read and write information to and from a memory card. A USB interface 1230 is included as one example of an interface for enabling connection of peripheral devices, or connection to other devices, such as other portable devices, computers, etc. In various embodiments of the HMD 1202, any of various kinds of interfaces may be included to enable greater connectivity of the HMD 1202.

A Wi-Fi module 1232 is included for enabling connection to the Internet via wireless networking technologies. Also, the HMD 1202 includes a Bluetooth module 1234 for enabling wireless connection to other devices. A communications link 1236 is also included, in some embodiments, for connection to other devices. In one embodiment, the communications link 1236 utilizes infrared transmission for wireless communication. In other embodiments, the communications link 1236 utilizes any of various wireless or wired transmission protocols for communication with other devices.

Input buttons/sensors 1238 are included to provide an input interface for any of the users 1-3 (FIG. 1). Any of various kinds of input interfaces are included, such as buttons, touchpad, joystick, trackball, etc. An ultra-sonic communication module 1240 is included, in various embodiments, in the HMD 1202 for facilitating communication with other devices via ultra-sonic technologies.

Bio-sensors 1242 are included to enable detection of physiological data from a user. In one embodiment, the bio-sensors 1242 include one or more dry electrodes for detecting bio-electric signals of the user through the user's skin.

The foregoing components of the HMD 1202 have been described as merely exemplary components that may be included in the HMD 1202. In various embodiments, the HMD 1202 includes or does not include some of the various aforementioned components.

FIG. 13 illustrates an embodiment of an Information Service Provider (INSP) architecture. An INSP 1302 delivers a multitude of information services to the users 1-3, who are geographically dispersed and connected via a computer network 1306, e.g., a LAN, a WAN, or a combination thereof, etc. The computer network 602 (FIG. 6B) is an example of the computer network 1306. An example of the WAN includes the Internet and an example of the LAN includes an Intranet. The user 1 operates a client device 1320-1, the user 2 operates another client device 1320-2, and the user 3 operates yet another client device 1320-3.

In some embodiments, each client device 1320-1, 1320-2, and 1320-3 includes a central processing unit (CPU), a display, and an input/output (I/O) interface. Examples of each client device 1320-1, 1320-2, and 1320-3 include a personal computer (PC), a mobile phone, a netbook, a tablet, a gaming system, a personal digital assistant (PDA), the game console 1100 and a display device, the HMD 1202 (FIG. 12), the game console 1100 and the HMD 1202, a desktop computer, a laptop computer, a smart television, etc. In some embodiments, the INSP 1302 recognizes a type of a client device and adjusts a communication method employed.

In some embodiments, an INSP delivers one type of service, such as stock price updates, or a variety of services such as broadcast media, news, sports, gaming, etc. Additionally, the services offered by each INSP are dynamic; that is, services can be added or taken away at any point in time. Thus, an INSP providing a particular type of service to a particular individual can change over time. For example, the client device 1320-1 is served by an INSP in near proximity to the client device 1320-1 while the client device 1320-1 is in a home town of the user 1, and the client device 1320-1 is served by a different INSP when the user 1 travels to a different city. The home-town INSP will transfer requested information and data to the new INSP, such that the information "follows" the client device 1320-1 to the new city, making the data closer to the client device 1320-1 and easier to access. In various embodiments, a master-server relationship is established between a master INSP, which manages the information for the client device 1320-1, and a server INSP that interfaces directly with the client device 1320-1 under control from the master INSP. In some embodiments, data is transferred from one INSP to another INSP as the client device 1320-1 moves around the world, so that the INSP in a better position to service the client device 1320-1 is the one that delivers these services.
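The "follow the client" behavior described above can be sketched as a region-based lookup that hands service off to the nearest INSP, with the master INSP as the fallback. The record layout and names below are hypothetical; the patent does not specify an API for this handoff.

```python
def select_insp(insps: list, client_region: str) -> dict:
    """Serve the client from the INSP covering its current region, so data
    stays close to the client; otherwise fall back to the master INSP,
    which manages the client's information."""
    for insp in insps:
        if insp.get("region") == client_region:
            return insp
    return next(i for i in insps if i.get("master"))

insps = [
    {"name": "insp-home", "region": "home-town", "master": True},
    {"name": "insp-city", "region": "new-city", "master": False},
]
# After the user travels, the nearby INSP takes over service; the
# home-town INSP would then transfer the user's data to it.
serving = select_insp(insps, "new-city")
```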

The INSP 1302 includes an Application Service Provider (ASP) 1308, which provides computer-based services to customers over the computer network 1306. Software offered using an ASP model is also sometimes called on-demand software or software as a service (SaaS). A simple form of providing access to a computer-based service, e.g., customer relationship management, etc., is by using a standard protocol, e.g., a hypertext transfer protocol (HTTP), etc. The application software resides on a vendor's server and is accessed by each client device 1320-1, 1320-2, and 1320-3 through a web browser using a hypertext markup language (HTML), etc., by special purpose client software provided by the vendor, and/or other remote interface, e.g., a thin client, etc.

Services delivered over a wide geographical area often use cloud computing. Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the computer network 1306. The users 1-3 do not need to be experts in the technology infrastructure in the "cloud" that supports them. Cloud computing is divided, in some embodiments, into different services, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on servers. The term cloud is used as a metaphor for the computer network 1306, e.g., using servers, storage, and logic, etc., based on how the computer network 1306 is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals.

Further, the INSP 1302 includes a game processing provider (GPP) 1310, also sometimes referred to herein as a game processing server, which is used by the client devices 1320-1, 1320-2, and 1320-3 to play single-player and multiplayer video games. Most video games played over the computer network 1306 operate via a connection to a game server. Typically, games use a dedicated server application that collects data from the client devices 1320-1, 1320-2, and 1320-3 and distributes it to other clients that are operated by other users. This is more efficient and effective than a peer-to-peer arrangement, but a separate server is used to host the server application. In some embodiments, the GPP 1310 establishes communication between the client devices 1320-1, 1320-2, and 1320-3, which exchange information without further relying on the centralized GPP 1310.
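The collect-and-distribute pattern of a dedicated server application, as opposed to a peer-to-peer arrangement, can be sketched as a fan-out relay. The client identifiers reuse the figure's reference numerals; the mailbox structure is a hypothetical stand-in for real network connections.

```python
def distribute(sender_id: str, payload: dict, clients: dict) -> None:
    """Dedicated-server relay: take an update collected from one client
    and fan it out to the inbox of every other connected client."""
    for client_id, inbox in clients.items():
        if client_id != sender_id:  # do not echo back to the sender
            inbox.append((sender_id, payload))

# Three connected clients, each with an (initially empty) inbox.
clients = {"1320-1": [], "1320-2": [], "1320-3": []}
distribute("1320-1", {"pos": (4, 2)}, clients)
```

A central relay like this keeps each client sending one update per tick instead of one per peer, which is the efficiency advantage the text attributes to dedicated servers.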

Dedicated GPPs are servers which run independently of a client. Such servers are usually run on dedicated hardware located in data centers, providing more bandwidth and dedicated processing power. Dedicated servers are a method of hosting game servers for most PC-based multiplayer games. Massively multiplayer online games run on dedicated servers usually hosted by the software company that owns the game title, allowing them to control and update content.

A broadcast processing server (BPS) 1312, sometimes referred to herein as a broadcast processing provider, distributes audio or video signals to an audience. Broadcasting to a very narrow range of audience is sometimes called narrowcasting. A final leg of broadcast distribution is how a signal gets to the client devices 1320-1, 1320-2, and 1320-3, and the signal, in some embodiments, is distributed over the air as with a radio station or a television station to an antenna and receiver, or through a cable television or cable radio or "wireless cable" via the station. The computer network 1306 also brings, in various embodiments, either radio or television signals to the client devices 1320-1, 1320-2, and 1320-3, especially with multicasting allowing the signals and bandwidth to be shared. Historically, broadcasts are delimited, in several embodiments, by a geographic region, e.g., national broadcasts, regional broadcasts, etc. However, with the proliferation of high-speed Internet, broadcasts are not defined by geographies, as content can reach almost any country in the world.

A storage service provider (SSP) 1314 provides computer storage space and related management services. The SSP 1314 also offers periodic backup and archiving. By offering storage as a service, the client devices 1320-1, 1320-2, and 1320-3 use more storage compared to when storage is not offered as a service. Another major advantage is that the SSP 1314 includes backup services, and the client devices 1320-1, 1320-2, and 1320-3 will not lose data if their hard drives fail. Further, a plurality of SSPs, in some embodiments, have total or partial copies of the data received from the client devices 1320-1, 1320-2, and 1320-3, allowing the client devices 1320-1, 1320-2, and 1320-3 to access data in an efficient way independently of where the client devices 1320-1, 1320-2, and 1320-3 are located or of the types of the client devices. For example, the user 1 accesses personal files via a home computer, as well as via a mobile phone while the user 1 is on the move.

A communications provider 1316 provides connectivity to the client devices 1320-1, 1320-2, and 1320-3. One kind of the communications provider 1316 is an Internet service provider (ISP), which offers access to the computer network 1306. The ISP connects the client devices 1320-1, 1320-2, and 1320-3 using a data transmission technology appropriate for delivering Internet Protocol datagrams, such as dial-up, digital subscriber line (DSL), cable modem, fiber, wireless, or dedicated high-speed interconnects. The communications provider 1316 also provides, in some embodiments, messaging services, such as e-mail, instant messaging, and short message service (SMS) texting. Another type of a communications provider is a network service provider (NSP), which sells bandwidth or network access by providing direct backbone access to the computer network 1306. Examples of network service providers include telecommunications companies, data carriers, wireless communications providers, Internet service providers, cable television operators offering high-speed Internet access, etc.

A data exchange 1318 interconnects the several modules inside the INSP 1302 and connects these modules to the client devices 1320-1, 1320-2, and 1320-3 via the computer network 1306. The data exchange 1318 covers, in various embodiments, a small area where all the modules of the INSP 1302 are in close proximity, or covers a large geographic area when the different modules are geographically dispersed. For example, the data exchange 1318 includes a fast Gigabit Ethernet within a cabinet of a data center, or an intercontinental virtual LAN.

In some embodiments, communication between the server system and the client devices 1320-1 through 1320-3 may be facilitated using wireless technologies. Such technologies may include, for example, 5G wireless communication technologies. 5G is the fifth generation of cellular network technology. 5G networks are digital cellular networks, in which the service area covered by providers is divided into small geographical areas called cells. Analog signals representing sounds and images are digitized in the telephone, converted by an analog-to-digital converter, and transmitted as a stream of bits. All the 5G wireless devices in a cell communicate by radio waves with a local antenna array and low power automated transceiver (transmitter and receiver) in the cell, over frequency channels assigned by the transceiver from a pool of frequencies that are reused in other cells. The local antennas are connected with the telephone network and the Internet by a high bandwidth optical fiber or wireless backhaul connection. As in other cell networks, a mobile device crossing from one cell to another is automatically transferred to the new cell. It should be understood that 5G networks are just an example type of communication network, and embodiments of the disclosure may utilize earlier generation wireless or wired communication, as well as later generation wired or wireless technologies that come after 5G.

It should be noted that in various embodiments, one or more features of some embodiments described herein are combined with one or more features of one or more of remaining embodiments described herein.

Embodiments described in the present disclosure may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. In one implementation, the embodiments described in the present disclosure are practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.

With the above embodiments in mind, it should be understood that, in one implementation, the embodiments described in the present disclosure employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Any of the operations described herein that form part of the embodiments described in the present disclosure are useful machine operations. Some embodiments described in the present disclosure also relate to a device or an apparatus for performing these operations. The apparatus is specially constructed for the required purpose, or the apparatus is a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, in one embodiment, various general-purpose machines are used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.

In an implementation, some embodiments described in the present disclosure are embodied as computer-readable code on a computer-readable medium. The computer-readable medium is any data storage device that stores data, which is thereafter read by a computer system. Examples of the computer-readable medium include a hard drive, a network-attached storage (NAS), a ROM, a RAM, a compact disc ROM (CD-ROM), a CD-recordable (CD-R), a CD-rewritable (CD-RW), a magnetic tape, an optical data storage device, a non-optical data storage device, etc. As an example, a computer-readable medium includes computer-readable tangible medium distributed over a network-coupled computer system so that the computer-readable code is stored and executed in a distributed fashion.

Moreover, although some of the above-described embodiments are described with respect to a gaming environment, in some embodiments, instead of a game, other environments, e.g., a video conferencing environment, etc., are used.

Although the method operations were described in a specific order, it should be understood that other housekeeping operations may be performed in between operations, or operations may be adjusted so that they occur at slightly different times, or may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing, as long as the processing of the overlay operations is performed in the desired way.

Although the foregoing embodiments described in the present disclosure have been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the embodiments are not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims

  1. A method for facilitating secret communication between a first player and a second player via a client device during a play of a game, comprising: receiving image data of the first player to identify one or more gestures during a game session of the game, wherein the one or more gestures are made by the first player towards the second player; receiving input data indicating one or more selections on a game controller during the game session, wherein the one or more selections are made by the second player; generating an inference of communication between the first player and the second player, wherein the inference of communication includes a determination that the one or more selections on the game controller are made in response to the one or more gestures made by the first player; generating additional inferences of communication between the first player and the second player; training an artificial intelligence model using the inference of communication and the additional inferences of communication between the first and second players; receiving additional image data to identify one or more additional gestures made by the first player for the second player during the play of the game, wherein the one or more additional gestures are made in a real-world environment including the first player and the second player; applying the one or more additional gestures to the artificial intelligence model for generating a recommendation for the second player, wherein the recommendation includes an action to be taken by the second player during the play of the game; and providing the recommendation to the client device used by the second player to notify the second player of the action.
  2. The method of claim 1, wherein the recommendation is generated without receiving additional input data in response to the one or more additional gestures.
  3. The method of claim 1, wherein said training the model comprises: extracting one or more features associated with the one or more gestures from the image data and one or more features from the input data; classifying the one or more features extracted from the image data to determine one or more actions performed by the first player; classifying the one or more features extracted from the input data to determine one or more actions performed by the second player; and associating the one or more actions determined as being performed by the first player and the one or more actions determined as being performed by the second player with a game context of the game.
  4. The method of claim 1, wherein said generating the additional inferences of communication between the first player and the second player comprises: training the artificial intelligence model based on additional time relationships and additional game contexts over a period of time, wherein the additional time relationships are associated with one or more additional selections made by the second player in response to one or more yet additional gestures made by the first player.
  5. The method of claim 1, wherein the game controller is a hand-held controller, wherein said generating the recommendation to the second player comprises: generating audio data to be sent to an audio device that is worn by the second player; or generating haptic feedback data to be sent to the hand-held controller that is held by the second player; or generating a message for display on a display device operated by the second player.
  6. The method of claim 1, wherein the image data represents an eye gaze of the first player and a head movement of the first player.
  7. A system for facilitating secret communication between a first player and a second player via a client device during a play of a game, comprising: an image camera configured to capture image data of the first player to identify one or more gestures during a game session of the game, wherein the one or more gestures are made by the first player towards the second player; a server coupled to the image camera via a computer network, wherein the server is configured to: receive input data indicating one or more selections on a game controller during the game session, wherein the one or more selections are made by the second player; generate an inference of communication between the first player and the second player, wherein the inference of communication includes a determination that the one or more selections on the game controller are made in response to the one or more gestures made by the first player; generate additional inferences of communication between the first player and the second player; train an artificial intelligence model using the inference of communication and the additional inferences of communication between the first and second players; receive additional image data to identify one or more additional gestures made by the first player for the second player during the play of the game, wherein the one or more additional gestures are made in a real-world environment including the first player and the second player; apply the one or more additional gestures to the artificial intelligence model to generate a recommendation for the second player, wherein the recommendation includes an action to be taken by the second player during the play of the game; and provide the recommendation to the client device used by the second player to notify the second player of the action.
  7. The system of claim 6, wherein the recommendation is generated without receiving additional input data in response to the one or more additional gestures.
  8. The system of claim 7, wherein to train the model, the server is configured to: extract one or more features associated with the one or more gestures from the image data and one or more features from the input data; classify the one or more features extracted from the image data to determine one or more actions performed by the first player; classify the one or more features extracted from the input data to determine one or more actions performed by the second player; and associate the one or more actions determined as being performed by the first player and the one or more actions determined as being performed by the second player with a game context of the game.
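The three training steps recited in claim 8 (feature extraction, per-player classification, and association with a game context) can be sketched as below. The feature names, threshold, and button label are assumptions chosen for illustration only; a real system would run pose estimation on the image data and decode button events from the controller input stream.

```python
def extract_features(image_data, input_data):
    # Hypothetical extraction: keypoints from images, button list from input.
    return image_data["keypoints"], input_data["buttons"]

def classify_gesture(keypoints):
    # Toy classifier: a raised hand (normalized height > 0.8) signals attack.
    return "signal_attack" if keypoints.get("hand_y", 0) > 0.8 else "idle"

def classify_input(buttons):
    # Toy classifier: the R2 trigger maps to an attack action.
    return "attack" if "R2" in buttons else "wait"

def associate(image_data, input_data, game_context):
    """Associate the two classified actions with the current game context,
    producing one training record of inferred communication."""
    keypoints, buttons = extract_features(image_data, input_data)
    return {
        "first_player_action": classify_gesture(keypoints),
        "second_player_action": classify_input(buttons),
        "game_context": game_context,
    }

record = associate({"keypoints": {"hand_y": 0.9}},
                   {"buttons": ["R2"]},
                   "boss_fight")
```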
  9. The system of claim 7, wherein to generate the additional inferences of communication between the first player and the second player, the server is configured to: train the artificial intelligence model based on additional time relationships and additional game contexts, wherein the additional time relationships are associated with one or more additional selections made by the second player in response to one or more yet additional gestures made by the first player.
  10. The system of claim 7, wherein the game controller is a hand-held controller, wherein to generate the recommendation to the second player, the server is configured to: generate audio data to be sent to an audio device that is worn by the second player; or generate haptic feedback data to be sent to the hand-held controller that is held by the second player; or generate a message for display on a display device operated by the second player.
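Claims 4 and 10 recite three alternative delivery channels for the recommendation: audio to a worn device, haptic feedback to the hand-held controller, or a message on a display. A minimal dispatcher sketch, with payload formats that are purely illustrative and not taken from the patent:

```python
def deliver_recommendation(action, channel):
    """Route a recommendation over one of the three claimed channels.
    Payload shapes below are hypothetical stand-ins."""
    if channel == "audio":
        # Audio data for the second player's worn audio device.
        return {"type": "audio", "data": f"tts:{action}"}
    if channel == "haptic":
        # Vibration pattern (ms on/off) for the hand-held controller.
        return {"type": "haptic", "pattern": [100, 50, 100]}
    if channel == "display":
        # Text message for the second player's display device.
        return {"type": "message", "text": f"Suggested: {action}"}
    raise ValueError(f"unknown channel: {channel}")
```

The audio and haptic channels keep the recommendation private to the second player, which fits the patent's goal of secret communication during shared-screen play.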
  11. The system of claim 7, wherein the image data represents an eye gaze of the first player and a head movement of the first player.
  12. The system of claim 7, wherein the recommendation is generated when the second player does not pay attention to the one or more additional gestures made by the first player.
  13. A computer system for facilitating secret communication between a first player and a second player via a client device during a play of a game, comprising: a processor configured to: receive image data of the first player to identify one or more gestures during a game session of the game, wherein the one or more gestures are made by the first player towards the second player; receive input data indicating one or more selections on a game controller during the game session, wherein the one or more selections are made by the second player; generate an inference of communication between the first player and the second player, wherein the inference of communication includes a determination that the one or more selections on the game controller are made in response to the one or more gestures made by the first player; generate additional inferences of communication between the first player and the second player; train an artificial intelligence model using the inference of communication and the additional inferences of communication between the first and second players; receive additional image data to identify one or more additional gestures made by the first player for the second player during the play of the game, wherein the one or more additional gestures are made in a real-world environment including the first player and the second player; apply the one or more additional gestures to the artificial intelligence model to generate a recommendation for the second player, wherein the recommendation includes an action to be taken by the second player during the play of the game; and provide the recommendation to the client device used by the second player to notify the second player of the action; and a memory device coupled to the processor.
  14. The computer system of claim 13, wherein the recommendation is generated without receiving additional input data in response to the one or more additional gestures.
  15. The computer system of claim 14, wherein to train the model, the processor is configured to: extract one or more features associated with the one or more gestures from the image data and one or more features from the input data; classify the one or more features extracted from the image data to determine one or more actions performed by the first player; classify the one or more features extracted from the input data to determine one or more actions performed by the second player; and associate the one or more actions determined as being performed by the first player and the one or more actions determined as being performed by the second player with a game context of the game.