U.S. Pat. No. 9,682,313

Cloud-based Multi-player Gameplay Video Rendering and Encoding

Assignee: Google Inc.

Issue Date: July 20, 2015

Illustrative Figure

Abstract

Generating, in real time, multiple gameplay videos in a cloud computing network of a mobile game played on multiple mobile devices is disclosed. A cloud-based video system of the cloud computing network receives gameplay state information of the mobile game played on the multiple mobile devices, where the gameplay state information associated with a mobile device describes the states of the mobile game while the game is played on that mobile device. The video system generates a gameplay map comprising the gameplay observed by the multiple mobile devices. Responsive to a viewer or a virtual director selecting the gameplay associated with a mobile device, the video system generates a gameplay video of the mobile game associated with that mobile device based on encoded audio frames and video frames of the mobile game played on the mobile device.

Description


DETAILED DESCRIPTION

The figures and the following description relate to embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.

Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.

FIG. 1 is a block diagram illustrating a system view of a video hosting service 100 having a cloud-based video system 200 for gameplay video processing. Multiple users/viewers use clients 110A-N to send video hosting requests to the video hosting service 100, such as requests to generate videos of mobile games played on a mobile device and to upload the videos to a video hosting website for sharing, and receive the requested services from the video hosting service 100. To simplify the description of embodiments, a video generated from a mobile game played on a mobile device is referred to as a "mobile gameplay video" or "gameplay video" from here on. A gameplay video is independent of the game rendering, and may include images and content different from what was rendered to the mobile device during the game. It can include other video/audio content (e.g., video from points of view other than the player's), and it can include less than all of what was presented at the mobile device. Thus, a gameplay video is independent in that its content may only partially overlap with what was rendered (i.e., share some video, but not necessarily all). The video hosting service 100 communicates with one or more clients 110 via a network 130. The video hosting service 100 receives the video hosting service requests (e.g., mobile gameplay video service) from clients 110, processes the requests with the cloud-based video system 200, and uploads the processed gameplay videos to a video sharing website and/or returns the processed gameplay videos to the clients 110.

Before describing the individual entities illustrated in FIG. 1, the following use case illustrates the operation of the cloud-based video system 200 of the video sharing service 100. Assume that Bianca is a casual gamer who likes to play the "Vegetable Ninja" video game on her mobile phone. Bianca's mobile phone includes a video game player, such as the GOOGLE PlayN open source game engine. Bianca enables live streaming of her video game before she starts playing or shortly afterward. Once Bianca enables live streaming of her video game, she receives a link to the live video of her game. Responsive to Bianca having the link, she can share it with her friends in a social network or send an email to her friends saying "Hey, watch me play this game now!" When her friends receive the link to the live video of her game, they can open a video player to watch the gameplay in near real-time even though they don't have the game installed on their devices. As Bianca is playing the game, the player on her mobile phone captures the game state into a gameplay state information file as the game progresses. The game state includes Bianca's user inputs, such as clicks, touchscreen gestures, and selections, as well as device motions (for games that make use of device orientation and acceleration data) and the state of game objects (e.g., points, characters, game level, etc.).

Turning to the individual entities illustrated in FIG. 1, each client 110 is used by a user, such as Bianca, to request and receive video hosting services. For example, a user uses a client 110 to send a request for sharing a gameplay video of a mobile game played on a mobile device. The client 110 can be any type of computing device, such as a mobile telephone, personal digital assistant, IP-enabled video player, or a personal computer (e.g., desktop, notebook, laptop). The client 110 typically includes a processor, a display device (or output to a display device), local storage, such as a hard drive or flash memory device, to which the client 110 stores data used by the user in performing tasks, and a network interface for coupling to the system 100 via the network 130.

A client 110 also has a video player 120 (e.g., the Flash™ player from Adobe Systems, Inc., or a proprietary one) for viewing a video stream, and adapted to play games. The video player 120 may be a standalone application, or a plug-in to another application such as a network browser. The player 120 may be implemented in hardware, or a combination of hardware and software. The player 120 is configured to play the gameplay video generated by the cloud-based video system 200. Using the Bianca example described above, when Bianca receives the link to the live stream of the gameplay, she can share the link with her friends, who can watch her play the game on YouTube. All of these implementations are functionally equivalent with regard to the described embodiments.

The player 120 includes user interface controls (and corresponding application programming interfaces) for selecting a video feed and for starting, stopping, and rewinding a video. The player 120 can also include in its user interface a video display format selection configured to indicate which video display format to use (e.g., a two-dimensional (2D) video or a three-dimensional (3D) video). Other types of user interface controls (e.g., buttons, keyboard controls) can be used as well to control the playback and video format selection functionality of the player 120.

In one embodiment, the client 110 implements an application programming interface (API) for gameplay video processing. The API can be divided into two categories: a data API and a state capture API. The data API controls data feeds to the client 110 (e.g., sources of top-rated mobile games, most-viewed mobile games, etc.), the user's playlists, subscriptions, comments, and contacts feed. The state capture API controls the behavior of the player 120 of the client 110 during game play. The term "game" refers generally to a video game that is played by a user on his/her mobile device, and an instance of such play is referred to as a "gameplay session." For example, the state capture API is invoked to capture various states of the gameplay session of the mobile game played on the client 110, such as clicks, mouse movement, timing information of each state, audio/video frame information associated with each state, and any other user input during the gameplay session.

In one embodiment, in response to the user of the client 110 ending a game session (e.g., quitting the game and selecting an upload button on his/her mobile device), the state capture API compiles the captured gameplay states into a gamestate information file in JSON (JavaScript Object Notation) format and uploads the gamestate information to a storage (e.g., GOOGLE Cloud Storage) of a cloud-based video system for further processing.
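The patent does not specify the schema of the gamestate information file; a minimal Python sketch of how the state capture API might compile captured states into JSON, with purely illustrative field names, could look like this:

```python
import json

# Hypothetical captured gameplay states; field names are illustrative,
# not taken from the actual state capture API.
captured_states = [
    {"t_ms": 0,    "event": "session_start", "game": "Vegetable Ninja", "level": 1},
    {"t_ms": 1250, "event": "touch", "x": 120, "y": 340},
    {"t_ms": 2100, "event": "swipe", "path": [[120, 340], [480, 200]]},
    {"t_ms": 9800, "event": "state", "points": 150, "level": 2},
]

def compile_gamestate_file(user_id, states):
    """Compile captured gameplay states into a JSON gamestate document."""
    return json.dumps({"user_id": user_id, "states": states}, indent=2)

gamestate_json = compile_gamestate_file("bianca", captured_states)
```

In practice the resulting document would be uploaded to cloud storage along with game and session identifiers, as described above.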

The network 130 enables communications between the clients 110 and the video hosting service 100. In one embodiment, the network 130 is the Internet or a mobile network that connects mobile devices to the Internet, and uses standardized internetworking communications technologies and protocols, known now or subsequently developed, that enable the clients 110 to communicate with the video hosting service 100. In another embodiment, the network 130 is a cloud computing network and includes one or more components of the video hosting service 100.

The video hosting service 100 comprises a cloud-based video system 200 for processing gameplay videos from clients 110 in a cloud computing environment. Other embodiments of the video hosting service 100 may comprise other components, such as a video server to process user-uploaded videos. The cloud-based video system 200 includes one or more computers executing modules for providing the functionality described herein. Depending on the embodiment, one or more of the functions of the cloud-based video system 200 can be provided in a cloud computing environment. As used herein, "cloud computing" refers to a style of computing in which dynamically scalable and often virtualized computing resources (e.g., processor, memory, storage, networking) are provided as a service over the network 130.

The cloud-based video system 200 receives mobile gameplay state information associated with a gameplay session of a mobile game from the player 120 of the client 110, renders the played mobile game based on the gameplay state information, and encodes the played mobile game into a gameplay video in the cloud computing network. The cloud-based video system 200 also uploads the gameplay video to the video hosting service 100 for sharing. In one embodiment, illustrated in FIG. 2, the cloud-based video system 200 includes a cloud storage 202, a local storage 204, a renderer 220, an encoder 230, an uploader 240, and a controller 250.

The cloud storage 202 stores mobile gameplay state information, which is a record of all of the states of a video gameplay session as the game was played on the player 120 of the client 110 during a given game session. The state information includes user actions (e.g., click, touch, etc.), as well as timing information of each state, audio/video frame information associated with each state, and any other user input during the gameplay session. In another embodiment, the state information includes changes to the model representing the mobile game as a user plays the game.

In one embodiment, the cloud storage 202 is implemented using the GOOGLE™ Cloud Storage API for storing and serving data in the cloud storage 202. Data stored in the cloud storage 202 is arranged in a file system format. In one embodiment, the gameplay state information is captured at the client device during the gaming sessions and then transmitted to the cloud-based video system 200. The mobile gameplay state information is stored as a gameplay state file in the cloud storage 202. The gameplay state file also contains the identification of the mobile game, details of the particular gaming session (e.g., date, time, user information, version information), and video metadata (e.g., video title, tags, description of the mobile game) associated with the mobile game. The cloud storage 202 also stores encoded audio/video data of the gameplay video for replaying and sharing. The local storage 204 stores gameplay rendering data, which describes the reconstructed video and audio representation of the gameplay on the mobile device.

In response to the state capture API at the client 110 uploading the gameplay state information to the cloud-based video system 200, the controller 250 stores the state information in the cloud storage 202 and notifies the renderer 220. The renderer 220 retrieves the gameplay state information stored in the cloud storage 202 and replays the game session, including rendering the audio representation (if the mobile game has sound) and video representation of the gameplay based on the gameplay state information.

In one embodiment, the renderer 220 uses a native Java binary from a cross-platform game engine (e.g., PlayN) and the gameplay state information to replay the user's game and reconstruct the audio and video representation of the gameplay. The video representation of the gameplay represents the video content of the gameplay session in video frames. Similarly, the audio representation of the gameplay represents the audio content of the gameplay session in audio frames. In one embodiment, the renderer 220 includes one or more virtual machines (e.g., Linux virtual machines) and renders the audio and video content of the gameplay. The renderer 220 stores the rendered audio and video data in the local storage 204 for further processing by the encoder 230.

The encoder 230 retrieves the rendered gameplay audio and video data from the local storage 204 and encodes the audio and video data into a gameplay video of the mobile game played on the client 110. In one embodiment, the encoder 230 includes an audio encoder to encode the gameplay audio data and a video encoder to encode the gameplay video data. The encoder 230 "stitches" (i.e., puts together) the encoded audio and video data to generate the gameplay video of the mobile game using FFmpeg. FFmpeg is an open-source software suite that contains libraries and programs for handling multimedia. Other multimedia processing tools can be used in other embodiments of the encoder 230.
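The exact FFmpeg invocation is not given in the patent; a hedged sketch of how an encoder might assemble the stitching command (file paths are illustrative, and `libvpx`/`libvorbis` are FFmpeg's standard VP8 and Vorbis encoder names) could be:

```python
def build_stitch_command(video_path, audio_path, out_path):
    """Assemble an FFmpeg command that stitches rendered video and audio
    streams into one gameplay video file. The flags shown are a plausible
    sketch, not the patent's actual invocation."""
    return [
        "ffmpeg",
        "-i", video_path,    # rendered video frames from local storage
        "-i", audio_path,    # rendered audio frames from local storage
        "-c:v", "libvpx",    # FFmpeg's VP8 video encoder
        "-c:a", "libvorbis", # FFmpeg's Vorbis audio encoder
        out_path,
    ]

cmd = build_stitch_command("gameplay.y4m", "gameplay.wav", "gameplay.webm")
```

In a real deployment this list would be handed to a process launcher (e.g., `subprocess.run`) on one of the encoder's virtual machines.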

In one embodiment, the encoder 230 uses the same or different virtual machines as the renderer 220 for the encoding process in the cloud computing environment. Using different virtual machines affords the encoder 230 scalability and resiliency in the event of failure of one or more virtual machines. In one embodiment, the encoder 230 encodes the video frames of the gameplay video data in VP8 format and encodes the audio frames of the gameplay audio data in Vorbis format. VP8 is an open source video compression format for high-quality real-time video, and Vorbis is an open source audio compression format intended for a variety of sample rates (e.g., 8 kHz to 192 kHz) and a range of channel representations (e.g., stereo, 5.1, or up to 255 discrete channels). Other audio and video encoding schemes can be used in other embodiments of the encoder 230.

The encoder 230 generates a gameplay video of the mobile game played on the client 110 based on the encoded audio and video frames of the gameplay. In one embodiment, the encoder 230 generates the gameplay video in WebM format, which is an audio-video format designed to provide an open source video compression format for use with HTML5 video. A gameplay video file in WebM format consists of the VP8 video and Vorbis audio streams of the encoded audio and video data of the gameplay. The encoder 230 stores the gameplay video and video metadata associated with the gameplay video in the cloud storage 202 for uploading to a video sharing service by the uploader 240. The video metadata associated with the gameplay identifies the user (i.e., gamer) of the mobile game and coding parameters (e.g., encoding format of the video/audio data, etc.).

In addition to generating a mobile gameplay video in a 2D format, another embodiment of the cloud-based video system 200 generates a 3D mobile gameplay video. Even if the player did not play the mobile game in 3D, such an additional reconstruction of the gameplay may be an attractive way of sharing one's game achievements. In this example, the renderer 220 analyzes the state file representing the mobile gameplay and determines an appropriate 3D video format for the mobile gameplay video. A 3D video format is typically a 2D video format with 3D-specific metadata. The 3D-specific metadata describes the manner in which the video frames of a 2D video are packed (e.g., left-and-right, or top-and-bottom) and the video format of the 2D video for 3D video encoding and display. The renderer 220 stores the determined 3D video format in the local storage 204 for 3D gameplay video encoding.

The encoder 230 retrieves the 3D video format and the rendered audio/video data of the mobile gameplay for 3D gameplay video encoding. In one embodiment, the encoder 230 is a VP8 encoder used in the WebM-3D encoding scheme. WebM-3D is a combination of a WebM container, VP8 video format, and StereoMode setting. An example of the WebM-3D specification, including the StereoMode setting, can be found at http://www.webmproject.org/code/specs/container#webm_guidelines. The encoded 3D mobile gameplay video can be displayed with a 3D visual effect according to the 3D metadata. The encoder 230 stores the 3D mobile gameplay video in the cloud storage 202 for uploading.
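A 3D format descriptor of the kind described above (a 2D format plus 3D-specific metadata) might be sketched as a small record; the field names here are hypothetical, and the frame-packing labels stand in for the container's StereoMode setting rather than reproducing its exact numeric values:

```python
def make_3d_metadata(frame_packing, base_format="VP8"):
    """Build illustrative 3D-specific metadata for a 2D video: the 2D
    video format plus how its frames are packed for 3D display."""
    allowed = ("side-by-side", "top-and-bottom")
    if frame_packing not in allowed:
        raise ValueError(f"unknown frame packing: {frame_packing}")
    return {
        "base_format": base_format,   # underlying 2D video format
        "frame_packing": frame_packing,  # stands in for a StereoMode setting
        "stereo": True,
    }

meta = make_3d_metadata("side-by-side")
```

The renderer would store such a descriptor in local storage, and the encoder would translate it into the container's actual StereoMode element when writing the WebM-3D file.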

The uploader 240 uploads the generated gameplay video (in 2D or 3D format) of the mobile game to the video hosting service 100. To upload the gameplay video, the uploader 240 retrieves the gameplay video and associated metadata from the cloud storage 202. The uploader 240 uses the user identifier contained in the metadata to verify the user of the mobile game before uploading the gameplay video. The uploader 240 communicates with the video hosting service 100 for the verification, and receives an authorization token associated with the user identifier. Responsive to an invalid authorization token associated with the user identifier, the uploader 240 notifies the user of the client 110 to provide user information for re-verification.

With the metadata of the gameplay video and the authorization token, the uploader 240 fetches the gameplay video and communicates with the video sharing service 100 for the uploading. In one embodiment, the uploader 240 fetches and uploads the gameplay video in a sequence of video chunks, each of which contains a portion of the gameplay video. Responsive to the video sharing service 100 confirming receipt of the uploaded video portion, the uploader 240 continues the uploading until the entire file of the gameplay video is saved in the video hosting service 100. The uploader 240 sends the user a notification containing a web link to the uploaded video in response to finishing the uploading.
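The chunked-upload loop described above can be sketched as follows; this is a minimal illustration in which `send_chunk` is a hypothetical callback that returns True when the service confirms receipt of a portion:

```python
def upload_in_chunks(video_bytes, chunk_size, send_chunk):
    """Upload a gameplay video as a sequence of chunks, continuing only
    after the service confirms receipt of each portion. A sketch of the
    uploader's loop, not the actual upload protocol."""
    offset = 0
    while offset < len(video_bytes):
        chunk = video_bytes[offset:offset + chunk_size]
        if not send_chunk(chunk):   # service did not confirm receipt
            return False            # abort; the caller may retry later
        offset += chunk_size
    return True

received = []
ok = upload_in_chunks(b"example gameplay video bytes", 8,
                      lambda c: received.append(c) or True)
```

A production uploader would also handle retries and resumption from the last confirmed offset, which this sketch omits.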

The controller 250 controls the mobile game processing among the renderer 220, the encoder 230, and the uploader 240, as well as storage in the cloud storage 202 and the local storage 204. Efficiently processing complex mobile game requests often involves multiple renderers 220, encoders 230, and/or uploaders 240. The controller 250 is further configured to efficiently distribute the requests among the multiple renderers 220, encoders 230, and uploaders 240. In one embodiment, the controller 250 is an application engine (e.g., GOOGLE™ App Engine) that controls the gameplay video processing.

For example, the controller 250 functions as a URL listener to detect mobile gameplay state information uploaded to the cloud-based video system 200. Upon detecting the mobile gameplay state information, the controller 250 creates a control entry in a database controlled by the controller 250 and notifies the renderer 220 to begin rendering.

The controller 250 monitors the rendering process of the renderer 220. In response to multiple rendering tasks, the controller 250 queues the rendering tasks for a particular renderer 220 or distributes the rendering tasks among multiple renderers 220. Upon finishing a rendering task, the controller 250 stores the rendered data in the local storage 204 and notifies the encoder 230 to begin encoding. In one embodiment, the controller 250 distributes the multiple rendering tasks based on a load-balancing scheme among the multiple renderers 220 such that the multiple renderers 220 balance rendering efficiency (e.g., throughput) against the computing resources allocated to the renderers 220.
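The patent leaves the load-balancing scheme unspecified; one simple possibility, shown here as a sketch with hypothetical names, is to assign each new rendering task to the renderer with the shortest queue:

```python
def distribute_tasks(tasks, renderers):
    """Assign each rendering task to the renderer with the smallest queue.
    A least-loaded sketch of the kind of load balancing described above;
    the actual controller's scheme is not specified in the patent."""
    queues = {r: [] for r in renderers}
    for task in tasks:
        least_loaded = min(renderers, key=lambda r: len(queues[r]))
        queues[least_loaded].append(task)
    return queues

queues = distribute_tasks([f"render-{i}" for i in range(6)],
                          ["vm-a", "vm-b", "vm-c"])
```

A real controller would also weight assignments by each virtual machine's available CPU and memory, per the throughput-versus-resources balance mentioned above.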

The controller 250 similarly monitors and controls the encoding tasks performed by one or more encoders 230. Upon finishing an encoding task, the controller 250 notifies the uploader 240 to upload the mobile game video to the video sharing service 100.

FIG. 3 is a flow chart of processing a gameplay video by a cloud-based video system 200 as described above. Initially, the video system 200 receives 310 gameplay state information of a mobile game from a mobile device (e.g., a mobile phone capable of playing video games). The gameplay state information is captured by the player of the mobile device and includes user information of the mobile device (e.g., user identifier) and user behavior while playing the mobile game (e.g., touch, click, state changes, etc.). The video system 200 stores 312 the gameplay state information in a cloud storage.

The cloud-based video system 200 includes a renderer to retrieve the gameplay state information from the cloud storage and render 314 the gameplay in a cloud computing environment by reconstructing audio (if available) and video representations of the mobile game at the corresponding gameplay states described by the gameplay state information. The renderer may include one or more virtual machines to perform the rendering, and a controller of the video system 200 controls and distributes the gameplay video processing requests and processing load among the multiple virtual machines. The rendered audio and video data are stored in a local storage of the video system 200.

The cloud-based video system 200 also includes an encoder for generating a mobile gameplay video in a cloud computing environment based on the rendered audio and video representations of the gameplay. For example, the encoder encodes 316 the audio representation of the gameplay in Vorbis format and encodes 316 the video representation of the gameplay in VP8 format. The encoder combines the encoded audio and video data to generate the mobile gameplay video (e.g., in WebM or WebM-3D format) and stores 318 the encoded mobile gameplay video in the cloud storage of the video system 200. An uploader of the video system 200 uploads 320 the encoded video to a video hosting service for sharing and notifies the user of the mobile device when the mobile gameplay video is uploaded.
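The numbered steps of FIG. 3 can be sketched as a minimal pipeline; every function, storage layout, and return value here is an illustrative stub, not the actual implementation:

```python
def render(state_info):
    """Stub for step 314: reconstruct audio/video from gameplay states."""
    return {"video": ["frame"] * len(state_info["states"]), "audio": ["sample"]}

def encode(rendered):
    """Stub for steps 316-318: encode rendered data into a WebM-style blob."""
    return b"webm:" + str(len(rendered["video"])).encode()

def process_gameplay(state_info, cloud_storage, local_storage):
    """Illustrative end-to-end flow: store state, render, encode, upload."""
    cloud_storage["state"] = state_info                    # step 312
    local_storage["av"] = render(state_info)               # step 314
    cloud_storage["video"] = encode(local_storage["av"])   # steps 316-318
    return "uploaded"                                      # step 320 hand-off

cloud, local = {}, {}
status = process_gameplay({"states": [1, 2, 3]}, cloud, local)
```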

FIG. 4 shows an example interaction between components of a cloud-based video system 400 for gameplay video processing in the Bianca use case described with reference to FIG. 1. Bianca uses a mobile device 402 (e.g., a mobile phone) to play her favorite video game. The mobile player 402 records the gameplay state information while Bianca plays the video game. The mobile player 402 uploads 420 the gamestate file to a cloud storage 404 of the cloud-based video system. A renderer 406 of the video system fetches and renders 422 the gameplay and saves 424 the rendered audio/video data in a local storage 408 of the video system. An encoder 410 of the video system fetches and encodes 426 the rendered audio/video data into a gameplay video. An uploader 412 of the video system fetches 428 the gameplay video from the cloud storage 404 and streams 430 the gameplay video to YOUTUBE™. The cloud video system 400 may comprise a controller (not shown) to monitor the video processing among the various modules of the video system 400.

The described embodiments of gameplay video processing advantageously allow mobile game developers (and potentially console or web game developers) to offload the computationally expensive process of rendering and encoding the gameplay to a cloud computing environment. For example, the embodiments of gameplay video processing can be applied to web-based gameplay processing, where the gameplay state file of a game run in the browser is created by a browser-resident game engine and sent by the browser to the cloud-based video system for rendering and encoding.

The cloud-based gameplay video processing offers efficient and scalable processing capability and enhanced user experience by allowing gamers to flexibly share their game videos with others. For example, a user can not only upload his/her gameplay video for sharing, but also have the gameplay video streamed in real time by the cloud-based video system. While a gamer plays a mobile game, the cloud-based video system constructs a live video stream based on gameplay state information and uses a video hosting service (e.g., YOUTUBE) to distribute the live video of the gameplay to viewers.

Another application of cloud-based gameplay video processing is multi-platform, multi-player live streaming event processing. For example, compared with traditional video streaming of a live event having multiple players, the cloud-based video system 200 re-generates the live event viewed from multiple camera angles, each of which represents a viewing perspective of the live event captured by a camera or mobile device. The cloud-based video system 200 provides different viewing points of the live event to viewers through a video hosting service (e.g., YOUTUBE). The users viewing the live event can select a particular viewing point of the live event, or a virtual director of the cloud-based video system 200 selects an interesting viewing point of the live event. The cloud-based video system 200 provides the rendering and encoding of the selected viewing point and presents the live event from that viewing point to the viewer.

FIG. 5 is a block diagram illustrating a system view of a video hosting service 100 having a cloud-based video system for multi-player gameplay video processing. The video hosting service 100 communicates with multiple game players 120 and multiple viewers 140 via a network 130. A game player 120 sends video hosting requests to the video hosting service 100, and receives the requested services from the video hosting service 100. The video hosting service 100 has a cloud-based video system 200 to process the video hosting requests received from the game players 120, such as generating gameplay videos selected by the viewers 140 or a director 260.

For a mobile game simultaneously played by multiple game players 120, the cloud-based video system 200 receives and stores the game state information from each game player. The cloud-based video system 200 generates a gameplay video for each game player 120 for the mobile game played on his/her mobile device. Each of the gameplay videos of the mobile game corresponds to a viewing point of the mobile game observed on a game player's mobile device. The process of generating the gameplay video for a game player 120 is similar to the process described with reference to FIGS. 2-4 above. Specifically, the cloud-based video system 200 receives the gameplay state information from a game player, which is captured by the mobile device of the game player, and the game state information includes user information of the mobile device (e.g., device identifier) and user behavior while playing the mobile game (e.g., touch, click, state changes, etc.).

Compared with the gameplay video generation for a single game player described with reference to FIG. 1 and FIG. 2, the cloud-based video system 200 for multiple game players 120 includes an additional module, a director module 260, that is configured to automatically select an interesting viewing point of the mobile game for a viewer 140 from the viewing points captured by the multiple mobile devices of the players. In one embodiment, the director 260 is implemented in a virtual machine of the cloud-based video system 200. The cloud-based video system 200 receives gameplay state information from the multiple players 120, and obtains common state information among the multiple players 120. The director 260 automatically selects an interesting viewing point of the gameplay for gameplay video generation and presents the generated gameplay video to a viewer 140.

In one embodiment, the cloud-based video system 200 continuously receives gameplay state information from the multiple players 120 in real time. In this scenario, the cloud-based video system 200 synchronizes the multiple streams of gameplay state information based on common state information (e.g., a game scene captured by multiple players 120). In another embodiment, the cloud-based video system 200 requests that the client devices (e.g., the mobile devices of the multiple players 120) periodically send a predetermined amount of their gameplay state information to the director 260. The length of the period and the amount of gameplay state information requested by the cloud-based video system 200 from the multiple players 120 are configurable design parameters; for example, every 100 milliseconds each player 120 sends a fixed amount of gameplay state information. The cloud-based video system 200 can adjust the length of the period and the amount of gameplay state information based on a variety of factors, such as network bandwidth, processing speed, and the workload of the director 260.
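One way the synchronization on common state information might work, sketched here with hypothetical field names, is to align each player's state stream at the earliest game scene that all players have observed:

```python
def synchronize_streams(streams):
    """Align per-player state streams on a common anchor (e.g., a shared
    scene id that every player observed). A sketch of the synchronization
    described above; the actual scheme is not specified in the patent."""
    # Scene ids present in every player's stream.
    common = set.intersection(
        *(set(s["scene"] for s in stream) for stream in streams.values())
    )
    anchor = min(common)  # earliest commonly observed scene
    return {player: [s for s in stream if s["scene"] >= anchor]
            for player, stream in streams.items()}

streams = {
    "p1": [{"scene": 1}, {"scene": 2}, {"scene": 3}],
    "p2": [{"scene": 2}, {"scene": 3}, {"scene": 4}],
}
aligned = synchronize_streams(streams)
```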

The cloud-based video system 200 can collect gameplay state information of multi-player mobile games in the most commonly used multi-player game designs, e.g., peer-to-peer gameplay and multi-player (more than two players) gameplay. In the peer-to-peer case, the cloud-based video system 200 can request that the two players send their gameplay state information directly to the director 260. In the multi-player case, where a third-party game server is often used to collect gameplay state information from the multiple players, the director 260 obtains the gameplay state information of the multiple players from the game server.

From the gameplay state information collected from multiple players, the director 260 automatically selects an interesting viewing point captured by the mobile device of a game player based on a variety of factors. Factors include the status of each player (e.g., the player's score, health, inventory, activity level) and the occurrence of specific events (e.g., completion of a scoring event such as a football touchdown, baseball home run, basketball slam dunk, or opponent kill in a first-person shooter). For example, for a zombie-killing mobile game, the director 260 can select the viewing point of the player who has killed the most zombies (i.e., has the highest current score) at the time of selection.
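The highest-current-score rule from the zombie-killing example can be sketched in a few lines; the record fields are illustrative:

```python
def select_viewing_point(player_states):
    """Pick the viewing point of the current leader, as in the
    zombie-killing example above: highest current score wins."""
    return max(player_states, key=lambda p: p["score"])["player"]

chosen = select_viewing_point([
    {"player": "120A", "score": 12},
    {"player": "120B", "score": 31},
    {"player": "120C", "score": 7},
])
```

A fuller director would combine this with the other factors named above (health, inventory, activity level, scoring events) rather than score alone.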

Another factor for selecting a point of view is the order and time allocated to the game players. Taking FIG. 6 as an example, the director 260 initially assigns a predetermined, equal amount of show time to the four game players 120A-120D and selects the viewing point of a player based on a round-robin scheme, e.g., displaying the viewing point of player 120A for 5 seconds, followed by the viewing points of player 120B, player 120C, and player 120D, each for 5 seconds.
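The round-robin scheme can be sketched as a fixed-order scheduler that cycles through the players in equal slots. The function name and signature are illustrative, not from the patent.

```python
from itertools import cycle

def round_robin_schedule(players, show_time_s, total_s):
    """Yield (player, seconds) slots in fixed order until total time is used."""
    schedule, elapsed = [], 0
    for player in cycle(players):        # A, B, C, D, A, B, ...
        if elapsed >= total_s:
            break
        schedule.append((player, show_time_s))
        elapsed += show_time_s
    return schedule

slots = round_robin_schedule(["120A", "120B", "120C", "120D"], 5, 40)
# 40 s at 5 s per slot yields 8 slots; each of the four players appears twice
```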

To complement selection based on the order and time allocated to the game players, the director 260 may use the performance of each game player, e.g., based on the player's performance score. The director 260 adjusts the show time allocated to a player based on the player's performance. In one embodiment, the probability of a player being selected by the director 260 is proportional to the player's performance score. For example, the director 260 grants a longer show time to a player who has better performance (i.e., a higher performance score) than to a player with worse performance. Other embodiments of the director 260 may use other factors to determine the selection.
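Score-proportional selection can be made concrete with a small sketch: each player's selection probability, and hence allocated show time, is the player's score divided by the total. The function names are illustrative assumptions.

```python
def selection_probabilities(scores):
    """Probability of each player being selected, proportional to score."""
    total = sum(scores.values())
    return {player: s / total for player, s in scores.items()}

def allocate_show_time(scores, total_s):
    """Show time per player, proportional to performance score."""
    probs = selection_probabilities(scores)
    return {player: round(total_s * w, 1) for player, w in probs.items()}

times = allocate_show_time({"120A": 10, "120B": 30}, total_s=20)
print(times)  # {'120A': 5.0, '120B': 15.0} -- the better performer gets more time
```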

In response to a viewer or the director 260 selecting the viewing point observed from the mobile device of a game player, the cloud-based video system 200 renders the gameplay in a cloud computing environment by reconstructing audio (if available) and video representations of the mobile game at the corresponding gameplay states described by the gameplay state information associated with the game player. The cloud-based video system 200 generates a mobile gameplay video based on the rendered audio and video representations of the gameplay.

To capture the simultaneous playing of the mobile game by multiple game players 120, the cloud-based video system 200 further generates a virtual gameplay map that contains information identifying each generated gameplay video of the mobile game and its associated game player and/or mobile device. The gameplay map is an interactive video that can be clicked on by a user, and it provides another perspective of the gameplay in the form of video. During the course of the gameplay, the gameplay map changes corresponding to the progress of the gameplay. In one embodiment, the director 260 of the cloud-based video system 200 generates the gameplay map based on the gameplay state information collected from the multiple players 120. The gameplay map may further include thumbnail images, each of which represents a rendered gameplay video associated with a game player. The gameplay map may also include ordering information of the rendered gameplay videos for presentation to a viewer. An uploader of the cloud-based video system 200 streams the encoded gameplay videos to the video hosting service 100 for sharing and viewing by one or more viewers 140. The video hosting service 100 or the director 260 may choose one of the encoded gameplay videos as a default gameplay video and display the gameplay map in terms of multiple thumbnail images of the individual gameplay videos associated with the multiple game players.
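The gameplay map described above is essentially a data structure tying each rendered video to its player, a thumbnail, a default video, and an ordering. A minimal sketch, with type and field names assumed for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GameplayEntry:
    """One rendered gameplay video and its associated player/device."""
    video_id: str    # e.g., "610A"
    player_id: str   # e.g., "120A"
    thumbnail: str   # thumbnail image shown in the map

@dataclass
class GameplayMap:
    """Map over all simultaneous gameplays of one mobile game."""
    entries: List[GameplayEntry] = field(default_factory=list)
    default_video: str = ""                              # default gameplay video
    ordering: List[str] = field(default_factory=list)    # presentation order

gmap = GameplayMap(
    entries=[GameplayEntry("610A", "120A", "thumb_610A.jpg"),
             GameplayEntry("610D", "120D", "thumb_610D.jpg")],
    default_video="610D",
    ordering=["610D", "610A"],
)
```

As the gameplay progresses, the director would mutate `entries` and `ordering`, which corresponds to the map "changing corresponding to the progress of the gameplay."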

FIG. 6 is an example of the process flow in real-time rendering of a mobile game played by multiple players for a viewer of the rendered gameplay video. In the example illustrated in FIG. 6, a video game 602 is played by four game players 120A-120D on their mobile devices. For each game player 120, the cloud-based video system 200 renders a gameplay video based on the gameplay state information from the game player, which describes the specific user actions (e.g., clicks, touches) and user/device information (e.g., user identifier). The gameplay state information associated with a game player represents a viewing point of the video game observed by the mobile device of the game player. For example, in response to the viewer selecting the gameplay of game player 120A, the cloud-based video system 200 renders a gameplay video 610A based on the gameplay state information captured by the mobile device of player 120A.

The cloud-based video system 200 also generates a virtual gameplay map 604 identifying the gameplay video and its associated game player/mobile device, e.g., gameplay video 610A associated with player 120A. The virtual gameplay map includes an encoded gameplay video, e.g., 610D, as the default gameplay image, and also includes a thumbnail image of each individual gameplay video 610A-610D in the gameplay map. In response to the gameplay videos and the gameplay map being uploaded to a video hosting service (e.g., YOUTUBE), the video hosting service displays the default gameplay image 610D and each individual gameplay video 610A-610D in a user interface 604 to share with one or more viewers. A viewer may click a thumbnail image (i.e., 610A-610D) to get a specific viewpoint of the video game played on the mobile device of the player associated with the selected viewpoint.

In another example with reference to FIG. 6, the director 260 automatically selects a viewing point as the most interesting viewing point for a viewer from the four viewing points captured by the mobile devices of the game players. Compared with a viewer selecting a viewing point, the director 260 has the advantage of access to all the available viewing points of the mobile game and is able to select a viewing point that makes viewing the mobile game more interesting. Additionally, the director 260 may switch from one viewing point to another in response to a triggering event, such as a game player achieving a certain event (e.g., killing a zombie). The director 260 can further enhance the user experience by arranging the rendered gameplay videos associated with the game players in an order that allows the video hosting service 100 to present the most interesting viewing point to a viewer first.

Using FIG. 6 as an example, the director 260 automatically selects the viewing point represented by the gameplay video 610D as the most interesting viewing point among the four available viewing points represented by the gameplay videos 610A-610D. The director 260 instructs the video hosting service 100 to present the selected gameplay video 610D to the viewer. The director 260 also includes the ordering information in the gameplay map, which instructs the video hosting service 100 to present the gameplay videos in a defined order (e.g., 610D first, followed by 610A, 610B, and 610C, then 610D again). In response to a predetermined event of the gameplay captured by the mobile device associated with video 610C, the director 260 updates the ordering, e.g., switching the currently displayed gameplay video 610D with the newly rendered gameplay video 610C.
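The event-triggered reordering in the example above can be sketched as moving the video that captured the triggering event to the front of the presentation order. The function name and signature are illustrative assumptions.

```python
def update_ordering(ordering, current_video, event_video):
    """On a triggering event, promote the capturing video to first position.
    Returns the new ordering and whether the displayed video changed."""
    new_order = [event_video] + [v for v in ordering if v != event_video]
    switched = event_video != current_video
    return new_order, switched

order, switched = update_ordering(
    ordering=["610D", "610A", "610B", "610C"],
    current_video="610D",
    event_video="610C",   # the device behind 610C captured the event
)
# order == ["610C", "610D", "610A", "610B"]; switched is True
```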

FIG. 7 is a flow chart of a virtual director selecting a gameplay of a mobile game for presentation to a viewer of the mobile game. Initially, the virtual director (e.g., the director 260 of the cloud-based video system 200) receives 710 the gameplay state information of the mobile game from multiple mobile devices associated with multiple game players. The director 260 selects 712 a viewing point of the gameplay of the mobile game as the most interesting viewing point and communicates the selection to other modules (e.g., renderer, encoder) of the video system 200 for rendering the gameplay video of the selected gameplay. Specifically, the renderer 220 of the video system 200 renders 714 the gameplay by reconstructing audio (if available) and video representations of the mobile game at the corresponding gameplay states described by the gameplay state information associated with the selection. The encoder 230 of the video system 200 encodes 716 the audio/video representations of the gameplay into a gameplay video. The director 260 presents 718, or instructs the video hosting service 100 to present, the generated gameplay video as the most interesting gameplay at the moment to a viewer. In response to a determination that a new interesting viewing point of the gameplay exists 720, the director 260 selects the new gameplay and switches 722 to the gameplay video of the selected gameplay as the new most interesting gameplay for the viewer.
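The FIG. 7 loop, select (712), render (714), encode (716), present (718), can be sketched as one pass through a pipeline of pluggable stages. The function names and the stub stages below are assumptions for illustration, not the patent's modules.

```python
def direct_gameplay(state_by_player, render, encode, present, interesting):
    """One pass of the FIG. 7 flow: pick a viewing point, render it,
    encode it, and present the result. Stages are injected as callables."""
    selected = interesting(state_by_player)        # step 712: pick a viewing point
    frames = render(state_by_player[selected])     # step 714: reconstruct A/V
    video = encode(frames)                         # step 716: encode to a video
    present(video)                                 # step 718: show to the viewer
    return selected

shown = []
winner = direct_gameplay(
    {"120A": [1, 2], "120B": [3, 4, 5]},           # toy per-player state lists
    render=lambda states: list(states),            # stub renderer
    encode=lambda frames: f"video({len(frames)} frames)",  # stub encoder
    present=shown.append,                          # stub presentation
    interesting=lambda s: max(s, key=lambda p: len(s[p])),  # most activity wins
)
# winner == "120B"; shown == ["video(3 frames)"]
```

Steps 720/722 would simply re-run this pass when a new interesting viewing point is detected, switching the presented video.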

By allowing a viewer or a virtual director to select a viewing point of the gameplay, the cloud-based video system 200 generates the selected gameplay video in real time and displays it to the viewer. Empowered by cloud-based gameplay video processing, users can have an enhanced experience of “who-is-where” in a game event.

Additional Configuration Considerations

Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.

Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms, e.g., as shown and described in FIG. 2. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.

In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.

Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, “hardware-implemented module” refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.

Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).

The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.

Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.

The one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).

The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.

Some portions of this specification are presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a machine memory (e.g., a computer memory). These algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.

Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.

As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.

Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context.

As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).

In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise.

Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for cloud-based rendering and encoding of gameplay videos as described herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.

Claims

  1. A method for generating a gameplay video in a cloud computing network, the method comprising: receiving gameplay state information from a computer device, the gameplay state information describing a plurality of states of a mobile game played on the computer device; rendering the mobile game played on the computer device to produce a rendered gameplay based on the received gameplay state information by reconstructing a video representation of the gameplay of the mobile game, the video representation of the gameplay representing video content of the gameplay of the mobile game, the rendered gameplay representing a viewing point of the mobile game observed from the computer device; and encoding the rendered gameplay to generate a gameplay video of the mobile game played on the computer device based on the rendered gameplay, wherein at least a portion of video content of the generated gameplay video is different from video content of the rendered gameplay.

  2. The method of claim 1, wherein the gameplay state information includes changes to a model representing the mobile game played on the computer device.

  3. The method of claim 1, wherein rendering the mobile game played on the computer device to produce the rendered gameplay further comprises: responsive to the mobile game having sound data, reconstructing an audio representation of the gameplay of the mobile game, the audio representation of the gameplay representing audio content of the gameplay of the mobile game.

  4. The method of claim 1, wherein generating the gameplay video comprises: encoding audio frames of an audio representation of the gameplay in a two-dimensional format; encoding video frames of a video representation of the gameplay in a two-dimensional format; and combining encoded audio frames and encoded video frames to generate the gameplay video of the mobile game in a two-dimensional format.

  5. The method of claim 1, wherein generating the gameplay video further comprises: encoding audio frames of an audio representation of the gameplay in a three-dimensional format; encoding video frames of a video representation of the gameplay in a three-dimensional format; and combining encoded audio frames and encoded video frames to generate the gameplay video of the mobile game in a three-dimensional format.

  6. The method of claim 1, further comprising: responsive to the mobile game being played on one or more other computer devices: receiving gameplay state information from each of the other computer devices; selecting a viewing point captured by one of the computer devices based on the received gameplay state information; and generating a gameplay video of the gameplay associated with the computer device that captured the selected viewing point.

  7. The method of claim 6, wherein selecting a viewing point captured by one of the computer devices comprises: selecting the viewing point based on status of players associated with the computer devices, wherein the status of a player associated with a computer device includes at least one of the player's game score, health, game inventory and game activity level.

  8. The method of claim 6, wherein selecting a viewing point captured by one of the computer devices further comprises: selecting the viewing point based on occurrence of a pre-specified event associated with playing the mobile game.

  9. The method of claim 1, further comprising: responsive to the mobile game being played on one or more other computer devices: generating a gameplay map of the mobile game played on the computer devices, the gameplay map including information identifying each gameplay video of the mobile game and its associated computer device and the gameplay map represented by a gameplay video selected from gameplay videos associated with the computer devices; and updating the gameplay map responsive to progress of the gameplay on the computer devices.

  10. A non-transitory computer-readable storage medium storing executable computer program instructions for generating a gameplay video in a cloud computing network, the computer program instructions comprising instructions for: receiving gameplay state information from a computer device, the gameplay state information describing a plurality of states of a mobile game played on the computer device; rendering the mobile game played on the computer device to produce a rendered gameplay based on the received gameplay state information by reconstructing a video representation of the gameplay of the mobile game, the video representation of the gameplay representing video content of the gameplay of the mobile game, the rendered gameplay representing a viewing point of the mobile game observed from the computer device; and encoding the rendered gameplay to generate a gameplay video of the mobile game played on the computer device based on the rendered gameplay, wherein at least a portion of video content of the generated gameplay video is different from video content of the rendered gameplay.

  11. The computer-readable storage medium of claim 10, wherein the gameplay state information includes changes to a model representing the mobile game played on the computer device.

  12. The computer-readable storage medium of claim 10, wherein the computer program instructions for rendering the mobile game played on the computer device to produce the rendered gameplay further comprise computer program instructions for: responsive to the mobile game having sound data, reconstructing an audio representation of the gameplay of the mobile game, the audio representation of the gameplay representing audio content of the gameplay of the mobile game.

  13. The computer-readable storage medium of claim 10, further comprising computer program instructions for: responsive to the mobile game being played on one or more other computer devices: receiving gameplay state information from each of the other computer devices; selecting a viewing point captured by one of the computer devices based on the received gameplay state information; and generating a gameplay video of the gameplay associated with the computer device that captured the selected viewing point.

  14. The computer-readable storage medium of claim 13, wherein the computer program instructions for selecting a viewing point captured by one of the computer devices comprise computer program instructions for: selecting the viewing point based on status of players associated with the computer devices, wherein the status of a player associated with a computer device includes at least one of the player's game score, health, game inventory and game activity level.

  15. The computer-readable storage medium of claim 13, wherein the computer program instructions for selecting a viewing point captured by one of the computer devices further comprise computer program instructions for: selecting the viewing point based on occurrence of a pre-specified event associated with playing the mobile game.

  16. The computer-readable storage medium of claim 10, further comprising computer program instructions for: responsive to the mobile game being played on one or more other computer devices: generating a gameplay map of the mobile game played on the computer devices, the gameplay map including information identifying each gameplay video of the mobile game and its associated computer device and the gameplay map represented by a gameplay video selected from gameplay videos associated with the computer devices; and updating the gameplay map responsive to progress of the gameplay on the computer devices.

  17. A computer system for generating a gameplay video in a cloud computing network, the system comprising: a non-transitory computer-readable storage medium storing executable computer program instructions, the computer program instructions comprising instructions for: receiving gameplay state information from a computer device, the gameplay state information describing a plurality of states of a mobile game played on the computer device; rendering the mobile game played on the computer device to produce a rendered gameplay based on the received gameplay state information by reconstructing a video representation of the gameplay of the mobile game, the video representation of the gameplay representing video content of the gameplay of the mobile game, the rendered gameplay representing a viewing point of the mobile game observed from the computer device; and encoding the rendered gameplay to generate a gameplay video of the mobile game played on the computer device based on the rendered gameplay, wherein at least a portion of video content of the generated gameplay video is different from video content of the rendered gameplay.

  18. The system of claim 17, wherein the gameplay state information includes changes to a model representing the mobile game played on the computer device.

  19. The system of claim 17, further comprising computer program instructions for: responsive to the mobile game being played on one or more other computer devices: receiving gameplay state information from each of the other computer devices; selecting a viewing point captured by one of the computer devices based on the received gameplay state information; and generating a gameplay video of the gameplay associated with the computer device that captured the selected viewing point.

  20. The system of claim 17, further comprising computer program instructions for: responsive to the mobile game being played on one or more other computer devices: generating a gameplay map of the mobile game played on the computer devices, the gameplay map including information identifying each gameplay video of the mobile game and its associated computer device and the gameplay map represented by a gameplay video selected from gameplay videos associated with the computer devices; and updating the gameplay map responsive to progress of the gameplay on the computer devices.
