U.S. Pat. No. 10,888,778

AUGMENTED REALITY (AR) SYSTEM FOR PROVIDING AR IN VIDEO GAMES

Assignee: Valve Corporation

Issue Date: January 9, 2020

Abstract

An augmented reality (AR) system allows for providing AR in video games. The disclosed AR system allows for layering AR content on top of the built-in features of video games to provide a unique “in-game” AR experience for gamers. A remote computing system provides a central data warehouse for AR content and related data that may be accessed by select client machines to render augmented frames with AR content during execution of video games. The AR content may be spatially-relevant AR content that is rendered at appropriate locations within a game world. The AR content may be event specific such that the AR content is added in response to game-related events. The disclosed AR system allows for adding multiplayer aspects to otherwise single player games, and/or sharing of AR content in real-time to provide augmentative features such as spectating, mixing of game worlds, and/or teleportation through AR objects.

Description

DETAILED DESCRIPTION

Described herein are, among other things, techniques, devices, and systems for providing augmented reality (AR) in video games. As mentioned, AR is traditionally regarded as a technology that is usable to enhance a user's experience with the real world (i.e., the physical environment of the user). The AR system disclosed herein enhances a user's experience, not with the real world, but with a game world of a video game. The disclosed AR system allows for layering AR content on top of the built-in features of video games. This in-game AR system is universal in the sense that it allows authors to provide a unique “in-game” AR experience for gamers by creating AR content for any video game, or multiple video games. In so doing, the disclosed AR system alleviates the burden on game developers to provide the same type of augmentative features to their own games. If left to their own devices, game developers would likely end up custom-building their own AR systems, which would likely result in AR systems that are game-specific and incompatible with other games released by other game developers. The disclosed in-game AR system is, by contrast, compatible with multiple different video games.

The disclosed in-game AR system may include, among other things, a remote computing system that acts as a central data warehouse for AR content and related data. In some embodiments, the remote computing system maintains AR content in a spatial database that associates the AR content with various data (e.g., a game identifier (ID), spatial data relating to game world coordinates of a video game, event data specifying game-related events, etc.). Additionally, or alternatively, the AR content may be associated with AR channels that act as filtering criteria for the AR content.

The remote computing system may further provide an interface for authors to create new AR content, which is thereafter maintained by the remote computing system and made accessible to a select audience of gamers who would like to augment their video games with in-game AR experiences. This content-creation interface may support the creation of different types of AR content including, without limitation, informational messages, two-dimensional (2D) objects, three-dimensional (3D) objects, screen shots with 2D and/or 3D pixel data, video clips, and the like. AR content can be “static,” and therefore rendered at a fixed location within a game world. AR content can be “dynamic,” and therefore moving or animating within the game world. In some embodiments, AR content can even be interactive through the use of plugins that allow authors to create executable programs that respond to real-time video game data as input to the programs. In this manner, a player of the video game can not only experience AR content that has been added to a game world, but may, in some cases, interact with AR content, much like playing a secondary video game within the game world of the core video game.

In order to render the AR content within a game world, a client machine may obtain access to AR content, and may identify and render relevant AR content within a game world, as appropriate, while a video game is executing on the client machine. The client machine may access the AR content from any suitable storage location (e.g., from a remote computing system over a computer network, from local memory after downloading the AR content from the remote computing system). In an example process, a client machine may execute a video game that is configured to output video game content in a series of frames. During game execution, the client machine may augment any given frame of the series of frames with AR content by: (i) obtaining, from the video game, video game data about a current state of the video game, (ii) identifying AR content based at least in part on the video game data, (iii) generating an augmented frame that includes both the video game content and the identified AR content, and (iv) rendering the augmented frame on a display associated with the client machine. Notably, the AR content is not generated by the video game executing on the client machine, but is retrieved from a separate resource that maintains the AR content for retrieval in rendering augmented frames by layering the AR content “on top of” the video game content. Although it is often stated herein that AR content is layered “on top of” the video game content, this is not to be taken literally, as the AR content can be merged with video game content in any suitable manner such that some video game content (e.g., translucent graphics) is rendered “on top of” the AR content.
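
By way of non-limiting illustration, the following Python sketch walks through steps (i) through (iv) above with an in-memory store; every name in it (ArRecord, GameState, near, augment_frame) is a hypothetical stand-in rather than the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class ArRecord:
    content: str   # e.g., an informational message
    coords: tuple  # game world coordinates the content is anchored to

@dataclass
class GameState:
    coords: tuple  # current location of the player-controlled character

def near(a, b, radius=10.0):
    # Simple proximity test standing in for a spatial-database query.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 <= radius

def augment_frame(frame_pixels, state, ar_store):
    # (ii) identify AR content based at least in part on the game state
    visible = [r for r in ar_store if near(r.coords, state.coords)]
    # (iii) generate an augmented frame layering AR over game content
    return {"game": frame_pixels, "ar": [r.content for r in visible]}

# (i) video game data obtained from the executing game for one frame
state = GameState(coords=(12.0, 0.0, 5.0))
store = [ArRecord("Secret door ahead!", (14.0, 0.0, 6.0))]
print(augment_frame("<pixel data>", state, store))  # (iv) then present it
```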

In some embodiments, the video game data—which is obtained from the executing video game and used to identify relevant AR content for augmenting a frame—may be spatial data that relates to game world coordinates within the game world of the video game. For example, the AR content may be identified based on its association with coordinates in the game world that relate, in some way, to a current location of a player-controlled character. In this manner, spatially-relevant AR content can be rendered with video game content in an augmented frame at a location within the game world of the video game. In some embodiments, the video game data can also be event data that relates to game-related events. For example, the AR content may be identified and rendered within the game world based on the occurrence of a game-related event (e.g., a shot fired from a gun, a game character entering/exiting a vehicle, etc.).

The disclosed AR system also allows for augmenting a single-player video game with various multiplayer aspects. This may be accomplished without having to overhaul the code for the single-player game in order to make it a multiplayer game. To enable such multiplayer aspects, client machines may exchange data over a computer network. For example, a first client machine executing a video game may be connected to a remote computing system so that the first client machine can receive, via the remote computing system, data emitted by a second client machine that is also executing the video game. The data emitted by the video game executing on the second client machine may be spatial data that specifies a current location of a second player-controlled character within a second instance of the game world that is being rendered on the second client machine. Upon receiving this spatial data over the network, the first client machine can identify and retrieve AR content (e.g., an AR avatar of the second player-controlled character), and the first client machine may render the retrieved AR content within the first instance of the game world that is being rendered on the first client machine, the AR content being rendered at a location within the game world that corresponds to the received spatial data. By way of example, this technique may allow for adding “speed running” to an otherwise single player game, whereby a first player using the first client machine sees an AR avatar of the second player's game character that is overlaid onto the video game content of the first player's video game.
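
A hedged sketch of the relayed spatial data follows; the JSON message shape is an assumption, as the disclosure does not specify a wire format:

```python
import json

def emit_spatial_update(character_id, coords):
    # Second client machine: emit the second player-controlled
    # character's current game world location.
    return json.dumps({"type": "spatial", "id": character_id,
                       "coords": coords})

def on_spatial_update(message, render_avatar):
    # First client machine: render an AR avatar at the received coords.
    data = json.loads(message)
    if data["type"] == "spatial":
        render_avatar(data["id"], tuple(data["coords"]))

on_spatial_update(
    emit_spatial_update("runner-2", [3.0, 1.5, -7.0]),
    lambda cid, pos: print(f"AR avatar {cid} at {pos}"))
```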

In some embodiments, the disclosed AR system may construct (or reconstruct) a model of a portion of a game world from a 3D screenshot (i.e., an image with depth data). In this case, AR content may be a 3D screenshot, and a 3D model of a portion of a game world captured in the 3D screenshot can be constructed to allow a first gamer to look around and/or move around a “slice” of a game world that was captured by a second gamer. This 3D slice of the game world can be rendered in an augmented frame 122 on the first gamer's client machine while playing a video game.

In some embodiments, the disclosed in-game AR system may allow for real-time sharing of AR content over a computer network between client machines of different gamers. For instance, AR content can be rendered in a first video game as a viewport, or even as a portal, into the game world of another player's video game. This technique may use 3D screenshots to construct a 3D model of a portion of the game world exhibited in a particular 3D screenshot. This allows gamers to interact with each other through AR content that is rendered in each video game as a window into the other video game's virtual game world.

The techniques and systems described herein may allow one or more devices to conserve resources with respect to processing resources, memory resources, and/or networking resources. For example, sending data, in lieu of actual content (e.g., images and/or video files), over a computer network reduces network bandwidth consumption, at least as compared to live game streaming technology in use today that sends a stream of content over a computer network at high bandwidth. As another example, selective download of AR content to client machines may reduce network bandwidth consumption and/or memory consumption by, and/or on, a local client machine that is configured to retrieve and render AR content during video game execution. Other examples are described throughout this disclosure.

FIG. 1 shows a block diagram illustrating example components of a client machine 100 having an augmented reality (AR) component 102 configured to render augmented frames during execution of a video game 104, the augmented frames including video game content and AR content. In general, the client machine 100 shown in FIG. 1 may represent a computing device that can be utilized by a user 106 to execute programs and other software thereon. The user 106 of the client machine 100, as shown in FIG. 1, is often referred to herein as a “player” in the context of the user 106 using the client machine 100 for the specific purpose of playing a video game 104 that is executing on the client machine 100, or that is executing on a remote computing system and playable on the client machine 100 as a streamed video game 104. Accordingly, the terms “user 106,” “player 106,” and/or “gamer 106” may be used interchangeably herein to denote a user of the client machine 100, wherein one of many uses of the client machine is to play video games.

The client machine 100 can be implemented as any suitable type of computing device configured to process and render graphics on an associated display, including, without limitation, a PC, a desktop computer, a laptop computer, a mobile phone (e.g., a smart phone), a tablet computer, a portable digital assistant (PDA), a wearable computer (e.g., virtual reality (VR) headset, augmented reality (AR) headset, smart glasses, etc.), an in-vehicle (e.g., in-car) computer, a television (smart television), a set-top-box (STB), a game console, and/or any similar computing device.

In the illustrated implementation, the client machine 100 includes, among other components, one or more processors 108—such as a central processing unit(s) (CPU(s)) and a graphics processing unit(s) (GPU(s))—a display(s) 110, memory 112 (or non-transitory computer-readable media 112), and a communications interface(s) 114. Although the example client machine 100 of FIG. 1 suggests that the client machine 100 includes an embedded display 110, the client machine 100 may in fact omit a display, but may, instead, be coupled to a peripheral display. Thus, the display 110 is meant to represent an associated display 110, whether embedded in the client machine 100, or connected thereto (through wired or wireless protocols).

The memory 112 (or non-transitory computer-readable media 112) may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The computer-readable media 112 may be implemented as computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor(s) 108 to execute instructions stored on the memory 112. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s) 108.

As will be described in more detail below, the client machine 100 may communicate with a remote computing system over a computer network via the communications interface(s) 114. As such, the communications interface(s) 114 may employ any suitable communications protocol for communicating over a wired infrastructure (e.g., coaxial cable, fiber optic cable, etc.), a wireless infrastructure (e.g., radio frequencies (RF), cellular, satellite, etc.), and/or other connection technologies.

In some embodiments, a remote computing system, such as the remote computing system 200 shown in FIG. 2, acts as, or has access to, a platform to distribute (e.g., download) programs (and content) to client machines, such as the client machine 100. Accordingly, the client machine 100 is shown in FIG. 1 as having a video game client 116 installed in the memory 112. The video game client 116 may represent an executable client application that is configured to launch and execute programs, such as video games (or video game programs). In other words, the video game client 116 may include gaming software that is usable to play video games on the client machine 100. With the video game client 116 installed, a client machine 100 may then have the ability to receive (e.g., download, stream, etc.) video games from a remote system over a computer network, and execute the video games via the video game client 116. Any type of content-distribution model can be utilized for this purpose, such as a direct purchase model where video games are individually purchasable for download and execution on a client machine 100, a subscription-based model, a content-distribution model where video games are rented or leased for a period of time, and so on. Accordingly, the client machine 100 may include one or more video games, such as the video game 104, within a video game library 118. These video games may be retrieved and executed by loading the video game client 116. In an example, a user may choose to play one of multiple video games they have purchased and downloaded to the video game library 118 by loading the video game client 116 and selecting a video game 104 to start execution of the video game 104. The video game client 116 may allow users 106 to log in to a video game service using credentials (e.g., a user account, password, etc.).

A remote computing system, such as the remote computing system 200 of FIG. 2, may further act as, or have access to, a platform to distribute (e.g., stream, download, etc.) augmented reality (AR) content 120 to client machines, such as the client machine 100. Accordingly, the client machine 100 is shown as having AR content 120 stored in the local memory 112 so that the AR content 120 is accessible from local memory 112. In general, this AR content 120 may be received (e.g., downloaded, streamed, etc.) from the remote system 200 over a computer network, and may be used in the process of rendering augmented frames that include the AR content 120 added to the video game content that is output by the video game 104 itself. As such, the AR content 120 may be maintained remotely (e.g., at the remote computing system 200) and accessed over a computer network. FIG. 1 illustrates an example augmented frame 122, which may be rendered as one of multiple frames during execution of the video game 104. As used herein, a “frame” means an image frame that is one of many image frames in a series of image frames to render a live video game on a display. Accordingly, an augmented frame 122 is a composite frame that includes both video game content 124 and AR content 120. Notably, the AR content 120 represents content that is not generated by the video game 104 itself. Thus, the AR content 120 represents supplemental computer-generated graphics that are added to the video game content after-the-fact. Accordingly, the video game content 124 in FIG. 1 represents video game content for one of a series of frames output by the video game 104 itself while executing on the client machine 100. In this sense, a “game world” of the video game 104 may be defined by a coordinate system, and the portion of the game world that is rendered in each frame of the series of frames may depend on various factors, including the current location of a player-controlled character 126 within the game world. The coordinate system of the game world may define coordinates that correspond to locations within the game world. FIG. 1 shows an example of a first-person shooter video game 104 that allows the player 106 to control the game character's 126 movements within the game world. For instance, the player 106 can provide user input to the client machine 100 (e.g., via a game controller, a touchscreen, etc.) to move the player-controlled character 126 from one location to another location, wherein each location is specified in terms of specific coordinates that indicate where the player-controlled character 126 is located within the game world at any given moment.

As will be described in more detail below, the AR content 120 that is accessible to the client machine 100 may be stored in association with spatial data, which may specify particular coordinates of a game world of a particular video game 104. In this manner, whenever the video game 104 renders a portion of the game world that includes coordinates associated with particular AR content 120, the AR content 120 may be identified based on its association with those coordinates, and the identified AR content 120 may be used to generate an augmented frame 122 that includes the AR content 120 presented at a location within the game world that corresponds to those coordinates.

To illustrate how the client machine 100 may operate to provide in-game AR, consider a frame, of a series of frames, that is to be rendered on the display(s) 110 associated with the client machine 100. To render the given frame, the AR component 102 executing on the client machine 100 may obtain, from the video game 104, video game data 128 about a current state of the video game 104, identify AR content 120 based at least in part on the video game data 128 (as shown by the arrow 130 in FIG. 1 to access the AR content 120, locally or remotely), generate an augmented frame 122 that includes video game content 124 output by the video game 104 and the AR content 120 that was identified based on the video game data 128, and render the augmented frame 122 on the display(s) 110.

The AR component 102 may be executed separately from the execution of the video game 104 so that, in the event that the video game 104 crashes, the AR component 102 does not crash, and vice versa. In this sense, the AR component 102 is decoupled from any particular video game 104 that is executing on the client machine 100, which provides an ability to have an AR system including the AR component 102 that is compatible with, and transferrable across, multiple video games so that AR content 120 can be added to any video game 104 to enhance the user experience. For example, the AR component 102 may be run as a separate process from the video game 104 (e.g., a separate .exe to that of the video game 104), and the AR component 102 and the video game 104 may communicate back and forth. The AR process can potentially communicate with multiple video games and/or multiple non-game applications at once. This AR process can also include, or be configured to load, plugins. These plugins may be executed within a security sandbox (or container). This decoupling of the AR component 102 from the video game 104 provides stability; the video game 104 will not crash the AR process and vice versa. Security is another benefit: because third party plugin code for rendering AR content 120 does not run in the same process as the video game 104, it is sandboxed and kept separate, thereby mitigating any potential for cheating with the AR system. In some embodiments, a video game in the form of an “AR Viewer”, described in more detail below, may allow users 106 to spectate on AR content 120 out of context of video game content 124. For example, an “AR Viewer” can access and render AR content 120 on a blank background or a 3D model representation of a game world.
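
The process decoupling described above might be sketched as follows, assuming a simple queue-based exchange between the game process and a separate AR process (an illustration only, not the disclosed inter-process mechanism):

```python
import multiprocessing as mp

def ar_process(queue):
    # Separate AR process: a crash here cannot take down the game loop.
    while True:
        video_game_data = queue.get()
        if video_game_data is None:
            break  # shutdown sentinel
        # ... identify and render AR content for this frame ...
        print("AR process received:", video_game_data)

if __name__ == "__main__":
    q = mp.Queue()
    proc = mp.Process(target=ar_process, args=(q,), daemon=True)
    proc.start()
    q.put({"coords": (1.0, 2.0, 3.0)})  # one frame's video game data
    q.put(None)
    proc.join()
```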

In the example of FIG. 1, the AR content 120 that was identified based on the video game data 128 includes first AR content 120(1) and second AR content 120(2). The first AR content 120(1) is, by way of example, a screenshot (e.g., a 2D or a 3D image), and the second AR content 120(2) is, by way of example, an informational message. The first AR content 120(1) may be associated with first coordinates within the game world, and the second AR content 120(2) may be associated with second coordinates within the game world. In this scenario, the video game data 128 obtained from the video game 104 may include spatial data that specifies a current location of the player-controlled character 126 within the game world, and possibly other spatial data, such as a current orientation of a virtual camera associated with the player-controlled character 126. The camera orientation data may indicate the field of view as seen from the perspective of the player-controlled character 126, and thus, when coupled with the current location of the player-controlled character 126, a set of coordinates corresponding to a portion of the game world that is within the field of view of the player-controlled character 126 can be determined. In this manner, the AR content 120 that is to be rendered in the augmented frame 122 can be identified based at least in part on the spatial data that specifies the current location of the player-controlled character 126, and possibly based on additional spatial data, such as camera orientation data, an index, and the like. These aspects of spatial data will be described in more detail below.
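
The field-of-view determination could, for example, reduce to an angle test between the camera's forward vector and the direction to an AR anchor; in this sketch the half-angle threshold is an assumption and the forward vector is taken to be unit length:

```python
import math

def in_field_of_view(cam_pos, cam_forward, anchor, half_angle_deg=45.0):
    # Angle between the camera's forward vector and the anchor direction.
    to_anchor = [a - c for a, c in zip(anchor, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_anchor))
    if dist == 0.0:
        return True
    cos_angle = sum(f * v for f, v in zip(cam_forward, to_anchor)) / dist
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# Screenshot anchored ahead of the character: within the field of view.
print(in_field_of_view((0, 0, 0), (0, 0, 1), (2, 0, 8)))   # True
# Informational message directly behind the character: culled.
print(in_field_of_view((0, 0, 0), (0, 0, 1), (0, 0, -5)))  # False
```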

Thus, the AR component 102 may determine that the screenshot (the first AR content 120(1)) is to be rendered at first coordinates that correspond to a first location within the game world, and that the informational message (the second AR content 120(2)) is to be rendered at second coordinates that correspond to a second location within the game world. In this manner, the AR content 120 may be “spatially-relevant” AR content 120 in the sense that it is associated with particular coordinates within the game world. The player 106 can therefore navigate the player-controlled character 126 around the AR content 120, which may, in some cases, remain fixed at a location within the game world.

As mentioned, the AR content 120 may additionally, or alternatively, be event-related AR content 120 in the sense that it is associated with particular events, as they occur in the video game 104. In this scenario, the first AR content 120(1) may be associated with a first game-related event, and the second AR content 120(2) may be associated with the first game-related event or a second game-related event, and the video game data 128 obtained from the video game 104 may include event data that indicates the occurrence of the game-related event(s).

In some embodiments, the AR component 102 may receive the video game data 128 from the video game 104 as part of a function call made by the video game 104. In this scenario, a game developer of the video game 104 may implement an application programming interface (API) in the video game code to provide a rendering hook that makes this type of function call to the AR component 102 during individual frame loops during game execution to pass video game data 128 to the AR component 102. For instance, a code library written by a service provider of the video game platform may be provided to a game developer for integration into their video game 104, which allows for providing an AR-related process that runs within the game process, and which is responsible for communicating with an external AR component 102 that runs in a separate process. The AR component 102 may be responsible for rendering an augmented frame 122 based on the video game data 128 and for requesting the AR content 120 that is to be rendered in the augmented frame 122. In some embodiments, the timing of the function call during a frame loop is such that the function call is made after rendering opaque graphics in the video game content 124, but before rendering translucent graphics in the video game content 124 so that the AR content 120 can be rendered between the two types of graphics. In some embodiments, depth data from a depth buffer is used to merge video game content 124 and AR content 120 appropriately. The function call may provide the video game data 128 (e.g., spatial data, such as game world coordinates, a matrix transform of the camera orientation of the player-controlled character 126, event data, etc.) to the AR component 102 so that the AR component 102 can retrieve relevant AR content 120 based on the video game data 128.
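
The rendering hook might look like the following sketch, with the function call placed between the opaque and translucent passes of a frame loop; the callback name and the fields passed are assumptions for illustration:

```python
class ArComponent:
    def on_frame(self, video_game_data):
        # Identify and draw AR content for this frame (stubbed here).
        print("AR pass with video game data:", video_game_data)

def frame_loop(game, ar_component):
    game.draw_opaque_graphics()
    # Function call made by the video game during each frame loop,
    # passing spatial data so AR content lands between the two passes.
    ar_component.on_frame({
        "coords": game.player_coords,     # game world coordinates
        "camera": game.camera_transform,  # matrix transform of the camera
    })
    game.draw_translucent_graphics()

class FakeGame:
    player_coords = (12.0, 0.0, 5.0)
    camera_transform = [[1, 0, 0, 0], [0, 1, 0, 0],
                        [0, 0, 1, 0], [0, 0, 0, 1]]
    def draw_opaque_graphics(self): pass
    def draw_translucent_graphics(self): pass

frame_loop(FakeGame(), ArComponent())
```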

In some embodiments, AR content 120 can be automatically “injected” into the video game by the AR component 102 as an overlay on an existing frame of video game content 124, which does not rely on coordinating with the game developer to implement any additional AR-related code into their video game. This automatic injection technique may be accomplished using Simultaneous Localization and Mapping (SLAM) technology, as will be described in more detail below. In short, a SLAM process may be performed offline by the remote computing system 200 shown in FIG. 2, and may be used to reconstruct game world geometry (e.g., 3D models of game worlds) incrementally from many images. This backend process may be done by a service provider of the video game platform, by game developers, and/or by crowd-sourcing game world images from player client machines 100. In this manner, SLAM can be used to automate the recognition of game world geometry depicted in a screenshot of video game content 124, and the 3D models of the game world that are generated by the SLAM process can be used by client machines 100 to augment video game content 124 with AR content 120 in a way that presents the AR content 120 in the context of the game world geometry. This may also allow for adding AR content 120 to the video game content 124 of back catalogue games whose code is no longer updated by a game developer. In some embodiments, the video game 104 may be configured to explicitly request AR content 120 itself and render the AR content 120 itself, without reliance on the AR component 102.

In some embodiments, the AR content 120 is overlaid on the video game content 124 in the process of rendering the augmented frame 122. For example, the video game 104 may output pixel data (e.g., color values, depth values, etc.) that correspond to the graphics that are to be rendered on the display(s) 110 for the augmented frame 122. The pixel data that is output by the video game 104 may indicate, for example, that opaque graphics are to be rendered first (e.g., at a greater depth value farther from the current location of the player-controlled character 126), and that translucent graphics (e.g., particles, clouds, dust, etc.) are to be rendered after the opaque graphics (e.g., at a lesser depth value closer to the current location of the player-controlled character 126). Accordingly, the AR content 120 may, in some cases, be rendered between opaque graphics and translucent graphics, such as by rendering the AR content 120 at a depth value between the depth values for the opaque and translucent graphics, respectively.
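
A per-pixel sketch of that depth-based merging, with invented color and depth values, is shown below; a real renderer would do this on the GPU across the whole frame:

```python
def merge_pixel(game_color, game_depth, ar_color, ar_depth,
                translucent_color=None, alpha=0.0):
    # AR content occludes, or is occluded by, opaque game geometry.
    color = ar_color if ar_depth < game_depth else game_color
    if translucent_color is not None:
        # Translucent graphics are then blended "on top of" the result.
        color = tuple(alpha * t + (1 - alpha) * c
                      for t, c in zip(translucent_color, color))
    return color

# AR object (depth 5.0) in front of a wall (depth 9.0), under light dust.
print(merge_pixel((20, 20, 20), 9.0, (255, 0, 0), 5.0,
                  translucent_color=(200, 200, 200), alpha=0.25))
```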

In some embodiments, the AR content 120 can be presented in a subtle manner within the augmented frame 122, such as with an icon that does not take up much space in the game world, and when the user 106 focuses on the AR content 120 (e.g., by hovering a pointer over the icon, moving close to the icon, etc.), a pop-up may be presented asking the user 106 if he/she would like to see more. If the user 106 indicates, via a selection of a button, that he/she would like to see more, then the full version of the AR content 120 may be presented (e.g., by expanding the icon into a screenshot, an informational message, an object, or any other form of AR content 120). In some embodiments, unsubscribed AR content 120 can be presented in this manner. AR channels are discussed in more detail below (e.g., see FIG. 3). In short, AR channels act as a filtering mechanism so that a user 106 can subscribe to one or more AR channels to see AR content that is relevant to those subscribed AR channels. However, in addition to receiving subscribed AR content from subscribed AR channels, a client machine 100 may receive AR content 120 to which the user 106 has not yet subscribed, and which is presented in a subtle manner within an augmented frame 122 to visually distinguish the unsubscribed AR content 120 from the subscribed AR content 120. This unsubscribed AR content 120 may be transmitted to the client machine 100 based on current game world coordinates of a to-be-rendered portion of a game world of a video game 104. Thus, the unsubscribed AR content 120 may be offered to the user 106 for subscription based on the location within the game world that the user 106 is currently experiencing.

FIG. 2 is a diagram illustrating an example system 202, including components of a remote computing system 200, for creating and maintaining AR content 120 and related data in a spatial database 204 so that the AR content 120 is selectively provisioned to client machines for use in video games. In the illustrated implementation, the remote computing system 200 includes, among other components, one or more processors 206, a communications interface(s) 208, and memory 210 (or non-transitory computer-readable media 210). The memory 210 (or non-transitory computer-readable media 210) may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The computer-readable media 210 may be implemented as computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor(s) 206 to execute instructions stored on the memory 210. In one basic implementation, CRSM may include random access memory (“RAM”) and Flash memory. In other implementations, CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s) 206. An augmented reality (AR) module 212 may represent instructions stored in the memory 210 that, when executed by the processor(s) 206, cause the remote computing system 200 to perform the techniques and operations described herein. The memory 210 is also shown as maintaining a video game catalogue 214, which may store a catalogue of video games, such as the video game 104, for distribution to client machines, such as the client machine 100, as described herein.

The communications interface(s) 208 may employ any suitable communications protocol for communicating over a wired infrastructure (e.g., coaxial cable, fiber optic cable, etc.), a wireless infrastructure (e.g., radio frequencies (RF), cellular, satellite, etc.), and/or other connection technologies. Authors, such as the authors 216(1) and 216(2) shown in FIG. 2, may access the remote computing system 200 over a computer network 218 using respective user computing devices 220(1) and 220(2). The computer network 218 may represent and/or include, without limitation, the Internet, other types of data and/or voice networks, a wired infrastructure (e.g., coaxial cable, fiber optic cable, etc.), a wireless infrastructure (e.g., radio frequencies (RF), cellular, satellite, etc.), and/or other connection technologies. The remote computing system 200 may, in some instances, be part of a network-accessible computing platform that is maintained and accessible via the computer network 218. Network-accessible computing platforms such as this may be referred to using terms such as “on-demand computing”, “software as a service (SaaS)”, “platform computing”, “network-accessible platform”, “cloud services”, “data centers”, and so forth. In general, the remote computing system 200 is configured to act as a central data warehouse for AR content 120 and related data.

The remote computing system 200 may be further configured to provide an interface (e.g., an application programming interface (API)) for user computing devices 220 to create new AR content 120. As such, the remote computing system 200 may receive, via a content-creation interface (e.g., API) and from user computing devices 220, instructions 222 to create AR content 120. FIG. 2 depicts a first author 216(1) using a first user computing device 220(1) to provide first instructions 222(1) to create new AR content 120, and a second author 216(2) using a second user computing device 220(2) to provide second instructions 222(2) to create new AR content 120. It is to be appreciated that the remote computing system 200 can support a community of such authors 216 who would like to create new AR content 120 so that it is maintained by the remote computing system 200 for access by client machines while playing video games.

In addition to an API that allows authors 216 to create AR content 120 outside of the execution of a video game 104, new AR content 120 can be created on a client machine 100 during execution of a video game 104 via plugin logic. For example, the AR component 102 executing on a client machine 100 may provide video game data 128 to a plugin(s) that creates new AR content 120 (e.g., post-it notes, screenshots, etc.) during gameplay. This plugin-created AR content 120 may be transient in the sense that it exists for the lifetime of the current player's 106 game session and is not persisted after the session ends. Alternatively, the plugin-created AR content 120 may be uploaded to the spatial database 204 so that it is accessible in a later game session. In some embodiments, plugin-created AR content 120 is shared in real-time with other players 106 who are playing the same video game 104 or a different video game. In this scenario, the remote computing system 200 functions as a server that relays AR content 120 between client machines 100 during game sessions. Authors 216 may also use the content-creation API to specify access rights associated with new AR content, and/or content previously created by the author 216. The content-creation API can also allow for adding AR content to pre-existing screenshots or video clips associated with one or more video games 104.

FIG. 2 shows a spatial database 204 that is used to store the AR content 120 created by authors 216. The spatial database 204 may associate the AR content 120 with various types of data, including, without limitation, the types of data shown in FIG. 2. For example, FIG. 2 illustrates how the spatial database 204 may include multiple records of AR content 120(1), 120(2), . . . , 120(N). Each record of AR content 120 may, for example, be associated with a game identifier (ID) 224, which uniquely identifies a video game 104 within the video game catalogue 214. In this manner, each record of AR content 120 can be tied to a particular video game. In some embodiments, an individual record of AR content 120 can be associated with multiple game IDs 224 of multiple different video games, or the spatial database 204 can maintain separate records to associate the same AR content 120 with multiple different game IDs 224 of different video games. In some embodiments, the game ID 224 may allow for specifying an aspect of a video game at any suitable level of granularity, such as a level of the video game, if the video game has multiple levels. By way of example, a record of AR content 120 may be associated with Level 3 of a particular video game, but not other levels.

Individual records of AR content 120 may also be associated with an AR channel ID 230, which uniquely identifies an AR channel. AR channels are described in more detail below (e.g., with reference to FIG. 3). In short, AR channels may act as filtering criteria to filter out irrelevant AR content 120 and send relevant AR content 120 to a client machine of a user 106 based on that user's 106 AR channel subscriptions.
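
Channel-based filtering might be sketched as a simple partition of records by subscription, as below; the field names are illustrative:

```python
def partition_by_subscription(records, subscribed_channels):
    # Subscribed content renders in full; the rest may appear as a
    # subtle icon offered for subscription.
    subscribed, offered = [], []
    for rec in records:
        (subscribed if rec["channel_id"] in subscribed_channels
         else offered).append(rec)
    return subscribed, offered

records = [{"channel_id": "speedruns", "content": "World-record route"},
           {"channel_id": "lore", "content": "Hidden backstory note"}]
full, subtle = partition_by_subscription(records, {"speedruns"})
print(full)    # rendered in full
print(subtle)  # rendered as a subtle icon
```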

Individual records of AR content 120 may also be associated with game world coordinates 226. The game world coordinates 226 may be considered to be spatial data 227 that specifies particular coordinates within a game world of a particular video game, the game world being defined by a coordinate system. In this manner, whenever the game world coordinates 226 associated with a record of AR content 120 are to be rendered in a frame in order to present a portion of the game world to the player 106 of a video game 104, the AR content 120 associated with those game world coordinates 226 can be identified and used to generate an augmented frame 122 that includes the AR content 120. In an illustrative example, an author 216(1) may create AR content 120, such as an informational message, for a given level of a video game that is to be presented in a doorway whenever that doorway is rendered in a frame of a series of frames. It is to be appreciated that multiple records of AR content 120 may be associated with the same game world coordinates 226 (e.g., the doorway on a given level of the video game), and some or all of the AR content 120 associated with those game world coordinates 226 are presentable for a given player 106 of the video game, depending on the access rights the player 106 has to access the AR content 120. For example, multiple informational messages may be associated with a doorway on Level 3 of a particular video game, and some or all of these informational messages may be visible to a given player 106 as AR content 120 when the doorway is in the field of view of the player-controlled character 126.
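
One plausible shape for such spatial-database records, expressed as an illustrative SQLite schema and proximity query (the actual storage layout is not specified in the disclosure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ar_content (
        ar_id      INTEGER PRIMARY KEY,
        game_id    TEXT NOT NULL,  -- game ID 224 (may scope a level)
        channel_id TEXT,           -- AR channel ID 230
        x REAL, y REAL, z REAL,    -- game world coordinates 226
        state      TEXT,           -- game world state 228
        payload    TEXT            -- message text, pixel data 232, etc.
    )""")
conn.execute("INSERT INTO ar_content VALUES (1, 'game-42/level-3', "
             "'messages', 10.0, 0.0, 4.5, 'pre-boss', 'Good luck!')")
# Query: AR content anchored near a doorway on Level 3 of this game.
rows = conn.execute("""
    SELECT payload FROM ar_content
    WHERE game_id = 'game-42/level-3'
      AND ABS(x - 10.2) < 1 AND ABS(y - 0.0) < 1 AND ABS(z - 4.0) < 1
""").fetchall()
print(rows)  # [('Good luck!',)]
```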

With reference again to the game ID 224, the game ID 224 may also be usable to disambiguate between multiple instances of the game world coordinates 226 within the game world of a video game 104. In other words, the game ID 224 can make the game world coordinates 226 unique in cases where the game world coordinates are ambiguous. Consider an example where an author 216(2) wants to attach a hologram as AR content 120 to a car that is provided as video game content 124. The AR component 102 executing on the client machine 100 may need to know to which car, of potentially many of the same make and model, it is to attach the AR content 120 (e.g., the hologram). For mobile objects, like cars, that can move around the game world, the game world coordinates 226 associated with such mobile objects may be expressed relative to the mobile object, as opposed to being expressed relative to a part of the game world environment outside of the mobile object. In this case, the game world coordinates 226 associated with a mobile object may not be enough to fully disambiguate the part of the game world to which the AR content 120 (e.g., a hologram) is to be attached, and the game ID 224 is therefore usable to fully disambiguate between multiple instances of game world coordinates 226.

As another example of how the game ID 224 can be used, consider a virtual hotel that appears in multiple different locations around the game world of a video game 104. While the video game 104 may express the game world coordinates 226 for the individual hotels relative to the hotels themselves (e.g., as if the hotel is a miniature game world in and of itself), each instance of the hotel may be uniquely identified by a different game ID 224. In general, it is to be appreciated that game world coordinates 226 may not be truly analogous to real-world coordinates due to various aspects of video games that are not shared by the real world (e.g., portals connecting disparate locations, pre-built rooms that are stitched together in a different order each time a video game is loaded (each session), etc.). For these and other reasons, the game ID 224 may be helpful to disambiguate between multiple instances of the same game world coordinates 226.

In an illustrative example, a record of AR content 120 may correspond to a screenshot (e.g., the first AR content 120(1) shown in FIG. 1) of a portion of a game world captured by a player-controlled character. When the screenshot was captured, it may have been associated with game world coordinates 226 and a game ID 224, as well as a camera orientation at the time the screenshot was captured. This data can be uploaded with the associated AR content 120 to create a new record in the spatial database 204. Thus, during gameplay, when the video game 104 provides video game data 128 in the form of spatial data that specifies current game world coordinates 226 and a current game ID 224 associated with a player-controlled character 126, a screenshot associated with that spatial data can be rendered as AR content 120 on top of the video game content 124 for that frame, allowing a first gamer 106 to see the same snapshot of a game world that was seen by a second gamer 106. In some embodiments, the actual screenshot is not displayed unless and until the current player's camera orientation matches the camera orientation associated with a screenshot, and, otherwise, when the camera orientations do not match, these screenshots may be presented in the game world as “floating” images, much like the example first AR content 120(1) shown in FIG. 1. In this manner, if a plurality of screenshots were captured in the same location of a game world and uploaded to the remote computing system 200 as AR content 120, a given player 106 may see a cluster of floating images that are viewable whenever the player 106 aligns his/her player-controlled character 126 with the camera orientations associated with those screenshots.
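
The orientation-match rule could be approximated as below, comparing camera forward vectors against an assumed tolerance; the vectors are taken to be unit length:

```python
import math

def orientations_match(current_forward, stored_forward, tol_deg=10.0):
    dot = sum(a * b for a, b in zip(current_forward, stored_forward))
    dot = max(-1.0, min(1.0, dot))  # both vectors assumed unit length
    return math.degrees(math.acos(dot)) <= tol_deg

def present_screenshot(current_forward, record):
    if orientations_match(current_forward, record["forward"]):
        return "full screenshot"
    return "floating image"  # cluster marker within the game world

print(present_screenshot((0.0, 0.0, 1.0),
                         {"forward": (0.05, 0.0, 0.9987)}))
```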

Individual records of AR content 120 may also be associated with a state 228. To illustrate how the state 228 can be used, consider a video game that presents a game world that dynamically changes between different states over time, such as when particular events occur that alter what is happening in the game. In an illustrative example, the game world may be presented in a first state before beating a boss, and in a second state after beating the boss. In this sense, the individual records of AR content 120 can be associated with these different game world states by virtue of the state 228. That is, first AR content 120 associated with particular game world coordinates 226 may be presented in a first state of the game world by its association with a first state 228, and when the state of the game world changes, the first AR content 120 may be removed, and second AR content associated with the same game world coordinates 226 may be presented in the second state of the game world by its association with a second state 228, and so on. In an illustrative example, when a player 106 starts a boss battle, the player 106 may see first AR content 120 in the form of informational messages that wish the player “good luck” in battling the boss, and then, when the player 106 beats the boss, the player 106 may see second AR content 120 in the form of informational messages that congratulate the player for beating the boss.

As yet another example of how the state 228 can be used, consider a video game that is playable in different modes (e.g., solo mode where every player fends for themselves, duo mode where players play in pairs, squad mode where players play in larger groups, etc.). These modes can be played independently, but within the same game world of the video game. Thus, a record of AR content 120 can be associated with a state 228 that corresponds to a particular game mode. In general, the state 228 may be anything that is used to filter on the context (not necessarily spatial context) in which AR content 120 is to be rendered in an augmented frame 122.

Individual records of AR content 120 may also be associated with pixel data 232. The pixel data 232 may be particularly associated with AR content 120 in the form of screenshots that were captured by players 106 of a video game. For example, the pixel data 232 may include a 2D array of per-pixel values (e.g., color values) to reconstruct a 2D screenshot of a game world. In some embodiments, the pixel data 232 includes per-pixel depth values that provide a sense of depth to the scene. Pixel data 232 that includes 3D information pertaining to a scene can be used by the AR component 102 of a client machine 100 to construct a 3D model of a game world.
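
Turning per-pixel depth values into camera-space 3D points might look like the following sketch, which assumes a simplified pinhole projection:

```python
def unproject(px, py, depth, width, height, fov_scale=1.0):
    # Map pixel coordinates to normalized device coordinates [-1, 1].
    ndc_x = (2.0 * px / width) - 1.0
    ndc_y = 1.0 - (2.0 * py / height)
    # Scale by depth to recover a camera-space 3D point.
    return (ndc_x * depth * fov_scale, ndc_y * depth * fov_scale, depth)

# One pixel of a 640x480 3D screenshot with depth 7.5 world units.
print(unproject(320, 120, 7.5, 640, 480))  # (0.0, 3.75, 7.5)
```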

The example types of data shown in FIG. 2 as being included in records of AR content 120 of the spatial database 204 are merely examples, and there may be other types of data associated with particular records of AR content 120. For example, access rights may be associated with individual records of AR content 120 to indicate particular users or groups of users that are to have visibility to the AR content 120 while playing a video game. For example, AR content 120 can be associated with tags that specify whether the AR content 120 is visible to the general public, to friends of the author 216 who created the AR content 120, or to other specified users or user groups. In some examples, AR content 120 can be associated with user interests, spoken languages (e.g., English, Japanese, Spanish, etc.), geographic locations, times of day, and the like. These types of data may act as filtering criteria to allow for sending AR content 120 to a requesting client machine 100 whenever one or more criteria are met. These types of data may additionally, or alternatively, act as rendering criteria to determine whether to render the AR content 120 (e.g., render AR content 120: if the current time corresponds to a particular time of day (e.g., within a particular time range), if the user 106 is presently located at a particular geographic location (e.g., within a particular geographic area/region), if the user 106 speaks a particular language, if the user 106 is interested in particular topics (e.g., as indicated in a user profile with user interests specified therein), etc.).

Various different types of AR content 120 may be created by authors 216 through the content-creation interface (e.g., an application programming interface (API)) that is provided by the remote computing system 200. Example types of AR content 120 include, without limitation, informational messages (e.g., messages posted by gamers 106), virtual objects (e.g., shapes, avatars, shooting targets, etc.)—including 2D and/or 3D objects, screenshots captured by players while playing a video game—including 2D and/or 3D screenshots, video clips, interactive objects (e.g., game characters or other virtual objects or graphics that move within the game world), etc.

To enable the creation of AR content 120 (e.g., AR content 120 that is static, dynamic, or otherwise interactive), the remote computing system 200 may provide an API for authors 216 to write code (e.g., an executable program, such as a plugin, which may be implemented as a dynamic-link library (DLL), JavaScript file, .exe, etc.) that is stored in a record of AR content 120 within the spatial database 204. In this scenario, instead of retrieving already-created AR content 120, video game data 128 about a current state of a video game 104 can be provided as input to the executable program of a record of AR content 120, and the executable program may generate and output AR content 120 based on the program's processing of the video game data 128. In this sense, “AR content” that is stored in a record of the spatial database 204 may, in some embodiments, include an “executable program” that is configured to generate AR content based on video game data 128 that is input to the executable program. In some embodiments, the AR component 102 executing on a client machine 100 may create a security sandbox, load one or more executable programs or plugins (e.g., DLLs) that correspond to an AR channel(s) to which the user 106 of the client machine 100 is subscribed, and provide video game data 128 to the plugins to have the plugins run their respective logic and return AR content 120. For example, there could be a folder of DLLs, each DLL representing a different plugin. When the user 106 subscribes to an AR channel(s), the AR component 102 may load the corresponding DLL(s) within a security sandbox, and then, for each frame that is to be rendered, the AR component 102 may provide video game data 128 as input to the corresponding DLL(s) that have been loaded, and may receive, as output from the DLL(s), AR content 120 that is to be rendered in the frame as an augmented frame 122. In an illustrative example, plugins can be created by authors 216 to allow for adding animated objects (e.g., game characters) to the game world of the video game 104 as an overlay of AR content 120. Using such a plugin layer, an author 216 may create a secondary game that runs separately with respect to the base (or core) video game 104 and is presented as an overlay on the video game content of the video game 104. In this sense, the video game 104 does not need to know, or care, about the AR content 120 that is rendered on top of the video game content 124, yet the interactive AR content 120 may nevertheless be dependent upon the video game data 128 about the current state of the video game 104 so that the interactive AR content 120 can be presented in an appropriate context within the game world (e.g., at an appropriate location and in a sensible manner given the geometry of the game world, at an appropriate time, etc.). In this manner, a player 106 can interact with AR content 120 by providing user input to control a player-controlled character 126. AR plugins can be executed locally on a client machine 100 or remotely at the remote computing system 200. In the latter case, AR content 120 can be received over the network 218 in real-time by client machines 100. There may be a plurality of executable programs (AR plugins) that are selectable by users 106 for download from the remote computing system 200, individual plugins generating AR content 120 for a specific purpose.
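
The plugin contract sketched below captures the per-frame flow (video game data 128 in, AR content 120 out); the class names and dictionary fields are invented for illustration, and real plugins would run inside a security sandbox:

```python
class ArPlugin:
    # Per-frame contract: video game data in, AR content out.
    def update(self, video_game_data):
        raise NotImplementedError

class EnemyOverlayPlugin(ArPlugin):
    def update(self, video_game_data):
        # Place an interactive AR character near the player each frame.
        x, y, z = video_game_data["coords"]
        return [{"type": "3d-object", "model": "enemy",
                 "coords": (x + 2.0, y, z)}]

def run_frame(plugins, video_game_data):
    ar_content = []
    for plugin in plugins:  # in a real system: sandboxed per plugin
        ar_content.extend(plugin.update(video_game_data))
    return ar_content

print(run_frame([EnemyOverlayPlugin()], {"coords": (10.0, 0.0, 3.0)}))
```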

In some embodiments, the content-creation interface (e.g., API) provided by the remote computing system 200 may allow an author 216 to create an executable program (e.g., plugin) that is configured to receive, as input, video game data 128 relating to the current scene (e.g., a 3D screenshot) of a game world that is being rendered on the screen of the client machine 100, and the executable program may output the AR content. In this manner, the interactive AR content 120 can be presented in context of the game world that is being presented on the screen of the client machine 100. For instance, an author 216 can use a plugin to write an executable program that causes an AR game character to run around the game world of a video game, like an enemy that the player 106 can try to shoot, capture, or otherwise interact with. The 3D screenshot data may allow for adding such interactive content 120 in a realistic way, such as by the character running around a wall in the game world, rather than running through the wall. For video games 104 that have similar player locomotion behaviors, similar-sized worlds, and/or similar game logic, an author 216 can create an executable program that is compatible with, and functional across, multiple video games. In this manner, the AR system, including the AR module 212, can foster a culture of secondary game development where authors 216 enjoy using plugin-creation APIs to create secondary AR-based games that run “on-top-of” multiple different video games, especially those with similar game worlds and player locomotion behaviors. In this sense, the authors 216 that use the AR system disclosed herein may, in fact, be game developers that are in the business of developing secondary AR-based games. For instance, plugins can be used to create AR game sessions, which use game state from multiple game instances in order to generate AR content 120 that may then be shared across multiple client machines 100. Users 106 of those client machines 100 may be able to participate in these AR game sessions without needing to execute the same video game—some AR games could be designed to allow each user to be in a different game or within a video game in the form of an AR Viewer. User interactions within an “AR game session” can be mediated by the network 218 (e.g., the AR platform enables users to interact with each other even if there is no support in a particular video game(s) for users to interact over the network 218).

Game developers may participate in this AR system by adding minor features to their game code to support the use of executable programs that provide interactive AR content 120, as described herein. For example, video games 104 can be developed by game developers whereby the video game is configured to emit data to, and receive data from, the AR component 102 executing on a given client machine 100 for purposes of adding interactive AR content 120 as an additional layer on top of the video game content 124. For instance, a game developer may add one or more lines of code to their video game 104 that emits data whenever a bullet is shot in the game, the data specifying a vector (or ray) within the game world that provides directionality and possibly magnitude (e.g., velocity) pertaining to the bullet that was fired by a player-controlled character 126 in the video game 104. The AR component 102 may receive this emitted data and provide it as input to an executable program for a particular record of AR content 120 associated with event data that corresponds to the firing of a bullet. The executable program may respond to such an event by generating AR-based shooting targets that are positioned within the game world, and which the player 106 can try to hit using a weapon (e.g., a gun, a knife, etc.) of the player-controlled character 126. The executable program may continue to receive data emitted by the video game 104 which informs the executable program as to whether an AR-based shooting target has been hit, and the executable program can output an exploding target so that the player 106 can interact with the AR-based shooting targets by shooting them and causing them to explode. This might be useful during a warm-up phase of a video game 104 for a player 106 to practice their shooting instead of just running aimlessly around a game world. In this sense, game developers can provide continued support for such AR-based interactivity in their games by updating their video games 104 to emit and/or receive particular data that is used by executable programs written by authors 216 using a plugin layer.
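
The bullet-fired event flow might be sketched as follows; the event names, field layout, and target-spawning rule are all assumptions rather than the disclosed behavior:

```python
import random

class ShootingTargetPlugin:
    def __init__(self):
        self.targets = [(5.0, 1.0, 20.0), (-3.0, 2.0, 18.0)]

    def on_event(self, event):
        if event["type"] == "bullet_fired":
            # Spawn a fresh AR target somewhere along the shot's ray.
            ox, oy, oz = event["origin"]
            dx, dy, dz = event["direction"]
            t = random.uniform(10.0, 30.0)
            self.targets.append((ox + dx * t, oy + dy * t, oz + dz * t))
            return {"ar": "new_target", "at": self.targets[-1]}
        if event["type"] == "target_hit":
            # Output an exploding target the player can see.
            return {"ar": "explosion", "at": event["at"]}
        return None

plugin = ShootingTargetPlugin()
print(plugin.on_event({"type": "bullet_fired",
                       "origin": (0.0, 1.5, 0.0),
                       "direction": (0.0, 0.0, 1.0)}))
```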

In some embodiments, the content-creation interface (e.g., API) provided by the remote computing system200may allow authors216to write executable programs (e.g., plugins) that provide data to the video game104as an additional layer of interactivity with the video game104. For example, an API can be used to write a program (e.g., a plugin) that sends data to the video game104during its execution in order to control when the player-controlled character126gains or loses health, or to control when and/or where a particular in-game enemy appears. A game developer may have to write game code that supports this kind of two-way interaction with the AR component102executing on a client machine100, and the game developer may determine when such a feature is enabled (e.g., by disabling the interactive AR content120during an online competitive match to prevent cheating, but allowing the interactive AR content120during single-player mode to provide a more dramatic and interesting AR experience on top of the base video game104). In some embodiments, different AR plugins (executable programs) written by different authors216may interact with each other by passing data back and forth under the management of the AR component102during video game execution. For example, authors216may create “utility” plugins that provide “quality of life improvements” to player-controlled characters126(e.g., an AR plugin to visualize useful information that is omitted from the video game's104built-in user interface).
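
A sketch of the reverse direction might look like the following, assuming (hypothetically) that the game developer exposes some game_channel object through which the AR component can push data back into the game; the method names and message schema are invented for this example.

```python
class HealthModifierPlugin:
    """Toy two-way plugin: pushes data back into a game that has opted in."""

    def __init__(self, game_channel, enabled=True):
        self.game_channel = game_channel  # hypothetical handle provided by the game
        self.enabled = enabled            # e.g., the developer disables this in competitive matches

    def on_event(self, event: dict) -> None:
        if not self.enabled:
            return  # feature disabled (e.g., online match): do nothing
        if event.get("type") == "ar_enemy_touched_player":
            # Ask the game to deduct health; the game itself remains the authority.
            self.game_channel.send({"op": "adjust_health", "delta": -10})
        elif event.get("type") == "ar_bonus_collected":
            self.game_channel.send({"op": "adjust_health", "delta": +5})
```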

In some embodiments, the remote computing system200may maintain a repository of 3D models of game worlds234. As mentioned, a SLAM process may be performed offline by the remote computing system200in order to reconstruct game world geometry and store the game world geometry in 3D models of game worlds234. The 3D models may be of a portion of a game world such that multiple 3D models may constitute an entire game world. SLAM (simultaneous localization and mapping) is a technology that uses computational algorithms to, among other things, construct a model (or map) of an unknown topographical environment from recognized shapes, points, lines, etc. exhibited in image data that captures at least a portion of that environment. For example, a 3D model may represent an aggregated region of a game world, reconstructed via SLAM from many game frames (e.g., reconstructed from dense, temporally contiguous video streams (of color data, with or without depth buffer data)). This reconstruction can be done as a separate, backend process so that the 3D models234are available a priori before AR plugins begin to execute on client machines100. For a given video game, the 3D reconstruction process may be performed by a service provider of the video game platform, by the game developer of the video game104, or by crowd-sourcing video game screenshots from player client machines100. User consent may be obtained before obtaining such data from client machines100of users106. Furthermore, users106may choose to participate in the process by voluntarily exploring game worlds and uploading screenshot data to the remote computing system200for purposes of constructing the 3D models234.

The SLAM-created 3D models234may be retrieved by client machines100for purposes of identifying a game world from an image of video game content124and determining a camera pose for purposes of adding AR content120in-game. In some embodiments, the 3D models234may be obtained by client machines100in order to automatically inject AR content120into unmodified video games and/or into pre-existing screenshots and/or videos of video games. This may enable a “streaming” use case where a video game104outputs a video stream without AR content120and the AR system (e.g., the AR component102and/or the AR module212) overlays AR content120on top of the video stream (e.g., by obtaining sufficient information from the video game, such as per-frame camera poses).

FIG. 3is a diagram illustrating an example system300whereby a client machine100can subscribe to AR channels that determine the AR content120that is received by the client machine100. In general, the client machine100, which may be the same as the client machine100described with reference toFIG. 1, may, prior to execution of a video game104and/or during execution of a video game104, receive AR content120from the remote computing system200over the computer network218. This may involve real-time streaming of AR content120during the execution of a video game104, or, to reduce latency and/or network bandwidth consumption, the AR content120can be downloaded from the remote computing system200and accessed from local memory of the client machine100when the AR content120is to be rendered as part of an augmented frame122. In some embodiments, whenever a user106starts a video game104on the client machine100, the video game client116sends, over the computer network218to the remote computing system200, a request that includes an identifier (e.g., the game ID224) of the video game104, and the remote computing system200uses the game ID224to look up the records of AR content120within the spatial database204that are associated with the game ID224. Thus, among the available AR content120maintained in the spatial database204, the remote computing system200may identify a subset of the available AR content120that is associated with the game ID224it received from the client machine100in the request. This identified AR content120may be further filtered based on one or more filtering criteria, resulting in a filtered subset of AR content120that is ultimately sent to the client machine100, the filtered subset of AR content120being the AR content120associated with the game ID224that also satisfies one or more filtering criteria. These filtering criteria may include one or more criteria that are satisfied if the user106who sent the request has access rights to the AR content120. For example, if a record of AR content120is visible exclusively to a particular group of users, that record of AR content120is sent to the client machine100if the logged-in user account that sent the request is included in the particular group of users. Otherwise, if a user106who is not included in that particular group of users sends a request for AR content120, that record of AR content120may not be sent to the client machine100because the user106may not have access rights to that AR content120.
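
The lookup-then-filter step described above could be sketched as follows; the record fields (game_id, visible_to_group) and the flat list standing in for the spatial database204are illustrative simplifications.

```python
def select_ar_content(spatial_db, game_id, user, user_groups):
    """Sketch of the server-side selection: match on game ID, then check access rights.

    spatial_db: iterable of record dicts (a stand-in for the real spatial database).
    user_groups: mapping of user -> set of group names the user belongs to.
    """
    # Step 1: narrow the available AR content to records for this video game.
    game_specific = [r for r in spatial_db if r["game_id"] == game_id]

    # Step 2: keep only records the requesting user is allowed to see.
    visible = []
    for record in game_specific:
        allowed_group = record.get("visible_to_group")  # None means publicly visible
        if allowed_group is None or allowed_group in user_groups.get(user, set()):
            visible.append(record)
    return visible
```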

Another example filtering criterion illustrated inFIG. 3is whether the AR content120is associated with a channel that the user106of the client machine100has subscribed to. FIG. 3shows a user interface302that the video game client116may display on the client machine100. The user interface302may include a list of menu items304for selection by the user to navigate to different aspects of a video game service provided by the remote computing system200. For example, a “Store” menu item304may allow the user106of the client machine100to browse content, such as video games within the video game catalogue214. A “Library” menu item304may allow the user106to browse a library of content, such as the video game library118accessible to the client machine100as a result of the user106having acquired (e.g., purchased, rented, leased, etc.) video games. A “News” menu item304may allow the user106to browse news articles published by a content publishing entity. A “Community” menu item304may allow the user106to interact with other users106of a video game service, such as friends and other users of a community. An “In-Game Augmented Reality” menu item304may allow the user106of the client machine100to access various AR features provided by the AR system disclosed herein. One of those AR features depicted inFIG. 3is a “Channels” AR feature, which allows the user106to subscribe to different AR channels306that dictate the type of AR content120that is sent to the client machine100over the computer network218, and thereby made visible to the user106of the client machine100. In other words, the available AR content120maintained by the remote computing system200may be divided into AR channels306, and users106can subscribe to one or more of those AR channels306, and possibly switch between channels in order to view different types of AR content120during execution of a video game104. The AR channels306can also be thought of as “layers” of AR content120that are made visible or remain hidden based on the user's106channel subscriptions.

FIG. 3shows four example AR channels306(1)-(4) that a user106may subscribe to, with a “Subscribe” button next to each AR channel306to which the user106has not yet subscribed. For already-subscribed channels, an indication that the user106has already subscribed to the AR channel306may be provided next to the AR channel306, such as a “check box” next to the “Screen Shots” channel306(2), indicating that the user106has already subscribed to that AR channel306(2). For example, a user106can subscribe to a “Fan fiction” channel306(1) to make AR content120related to Fan Fiction visible during execution of a video game104(e.g., multiple informational messages may be placed throughout a game world that tell a story, and possibly give hints as to where to find a next informational message, similar to geocaching). The user106can subscribe to a “Screen Shots” channel306(2) to make AR content120comprised of screenshots visible during execution of a video game104. A user106may subscribe to a “Friends” channel306(3) to make AR content120associated with one or more of the user's106friends visible during execution of a video game104. This type of AR channel306(3) may be provided with a drop-down menu, or a similar selection mechanism, to select particular friends and/or groups of friends that the user106would like to subscribe to. The user106may subscribe to a “Pro Gamers” channel306(4) to make AR content120associated with one or more professional gamers visible during execution of a video game104. Again, a drop-down menu, or a similar selection mechanism, may be provided with this type of AR channel306(4) to select particular professional gamers or groups of gamers that the user106would like to subscribe to. These are merely examples of the types of AR channels306a user106may subscribe to.

In some embodiments, a user106may subscribe to an AR channel306that selects, and makes visible, only the AR content120that is currently trending (e.g., an above-threshold amount of community activity with the AR content120, an above-threshold number of views or positive votes, trending AR content120among the user's106friends, etc.). In other words, one or more of the AR channels306may be configured to selectively make visible a subset of the AR content120for that AR channel306that might be of interest to the user106, based on a heuristic that indicates a certain level of interest in the AR content120from other users of a community. In some embodiments, the AR module212of the remote computing system200may be configured to offer a player106who is currently playing a video game an option to subscribe to an AR channel306on-the-fly, during gameplay. For instance, a friend of the player106may be currently playing a treasure hunt type of AR-based secondary game, and the player106using the client machine100may be presented with a pop-up option to subscribe to an AR channel306that provides AR content120for the same treasure hunt that his/her friend is enjoying at the moment. Upon subscribing to the AR channel306for the treasure hunt, the client machine100may download the AR content120that provides the treasure hunt as a secondary game that runs on top of the core video game104executing on the client machine100. As another example, a gamer106may be playing a video game104and may be streaming the video game to a community of users. The player106using the client machine100may see, within the game world of a video game104he/she is currently playing, an avatar of that gamer106presented within the video game world, along with a number of viewing users (e.g., 10 k, for 10,000 viewers) over the avatar, indicating that the AR content120is currently trending. The player106using the client machine100may click on the avatar, or a subscribe button next to the avatar, to bring up a 2D or 3D broadcast of the gamer's106gameplay.

With specific reference again toFIG. 3, in response to the user106selecting a subscribe button associated with a particular AR channel306, the client machine100may send a subscription request to the remote computing system200over the computer network218. The remote computing system200, upon receiving the subscription request, may identify the AR content120that satisfies this filtering criterion (e.g., AR content120associated with the subscribed-to AR channel306) and may send this AR content120to the client machine100for use in generating augmented frames122with the AR content120. Accordingly, the client machine100receives the AR content120that satisfies one or more filtering criteria (e.g., AR content120associated with the subscribed-to AR channel306), and the client machine100may store the AR content120in local memory of the client machine100, and in association with relevant data (e.g., spatial data, such as game world coordinates226, indices228, camera orientation data, pixel data232, event data, etc.). In this manner, the AR content120is retrievable from local memory whenever it is to be rendered in an augmented frame122, which can reduce latency and/or network bandwidth consumption.

In some embodiments, the one or more filtering criteria for filtering the AR content120that is ultimately sent to the client machine may further include filtering criteria that are meant to reduce local memory consumption and/or network bandwidth consumption. For instance, consider a scenario where the amount of AR content120associated with a particular game is so great that it is difficult to download all of the relevant AR content to local memory of the client machine100. In such a scenario, the relevant AR content120may be filtered based on recency (e.g., a creation date of the AR content120), popularity (e.g., based on a number of views, number of positive votes, etc.), the author of the AR content120(e.g., prioritizing AR content120created by friends over general users), user interests known about the user106(e.g., user interests specified in a user profile, based on usage history, etc.), amount of data (e.g., only download AR content120that is less than a threshold amount of data), and so on. In this manner, the most relevant AR content120may be selected and less relevant AR content120may be filtered out so that the client machine100receives only the most relevant AR content120.
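
One plausible shape for this kind of relevance filtering is sketched below; the thresholds, field names, and the simple keep-if-any-signal rule are arbitrary choices for illustration.

```python
from datetime import datetime, timedelta

def filter_for_download(records, friends, max_age_days=90, min_votes=10, max_bytes=5_000_000):
    """Sketch: keep recent, popular, or friend-authored records under a size cap."""
    cutoff = datetime.utcnow() - timedelta(days=max_age_days)
    kept = []
    for r in records:
        if r["size_bytes"] > max_bytes:
            continue  # too large to cache locally
        recent = r["created_at"] >= cutoff
        popular = r["positive_votes"] >= min_votes
        friend_made = r["author"] in friends
        if recent or popular or friend_made:
            kept.append(r)
    # Most relevant first: friend-authored, then popular, then recent.
    kept.sort(key=lambda r: (r["author"] in friends, r["positive_votes"], r["created_at"]),
              reverse=True)
    return kept
```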

Another AR feature depicted inFIG. 3is an “AR Viewer”308feature. This feature allows a user106to select the AR Viewer308link to invoke an AR viewer, which is considered herein to be a type of video game104that can display AR content120out of context of video game content124. For example, an AR viewer can present AR content120on a blank background or as an overlay on a SLAM-created 3D model(s) of a game world. In any case, the use of the AR Viewer308feature does not rely on loading a video game104, and thus the AR viewer may be executed on a client machine100whose processing resources may not be suitable for running a PC game (e.g., a phone). In an illustrative example, a user106may want to see what his/her friend is seeing right now while playing a video game104that is augmented with AR content120. The user106, on his/her phone, may select the AR viewer308link. Selecting the AR viewer308link, and a sub-link to the friend's current game session, causes a query to be submitted to the remote computing system200with a game ID, a location in the game world where the friend's game character is currently located, and a camera orientation. Based on this data, the remote computing system200may access the spatial database204and return AR content120that is visible at that location with that camera orientation. This AR content120can be presented in a series of frames on the user's phone (e.g., in 3D), and the user106can look around and see his/her friend's character running around the game world the friend is currently experiencing. In this manner, the user106can spectate on the AR content120that his/her friend is currently experiencing at a remote location.

The processes described herein are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes.

FIG. 4is a flow diagram of an example process400for providing a content-creation interface for authors to create AR content, storing the created AR content in a spatial database, and sending select AR content to a requesting client machine based on one or more filtering criteria. For discussion purposes, the process400is described with reference to the previous figures.

At402, a remote computing system200may provide an interface (e.g., an API) for user computing devices220to create new AR content120. This content-creation interface may be usable by authors216using the user computing devices220to create any suitable type of AR content120, as described herein.

As shown by sub-block404, in some embodiments, providing the content-creation interface at block402may allow authors216to create one or more executable programs (e.g., plugins) that are configured to generate interactive AR content120based on video game data128provided as input to the executable program(s). Such video game data128may include, without limitation, spatial data specifying, among other things, a current location of a player-controlled character or some other object in the video game, and event data indicating the occurrence of a game-related event (e.g., a bullet being fired, a game character entering or exiting a vehicle or a doorway, etc.).

At406, the remote computing system200may receive, via the interface and from a user computing device220, instructions to create new AR content120. These instructions may include user input provided to the interface in the form of a graphical user interface that allows the author216to specify a type of AR content and other parameters relating to the AR content, such as a game ID224of a video game, spatial data227, such as game world coordinates226where the AR content120is to be presented within the game world of the video game, event data, and the like.

At408, the remote computing system200may store the new AR content120in the spatial database204. As shown by sub-block410, the storing of the new AR content120in the spatial database204may involve associating the new AR content120with associated data within the spatial database204. The associated data may include, without limitation, a game ID224of a video game, game world coordinates226within a game world of the video game, an index228to disambiguate the game world coordinates226if the game world coordinates226are otherwise ambiguous, camera orientation data (e.g., a camera orientation associated with a captured screenshot), pixel data232(e.g., pixel data associated with a captured screenshot), event data, and the like.
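
As a rough illustration of blocks 408-412, a stored record might carry fields like the following; the field names and the plain-list "database" are invented for the sketch and are not the actual schema of the spatial database204.

```python
# One plausible shape for a spatial-database record, shown as a plain dict.
new_record = {
    "content_id": "ar-0001",
    "payload": "screenshot_0042.png",        # the AR content itself, or a reference to it
    "game_id": "game-123",                   # which video game the content belongs to
    "world_coords": (412.0, 3.5, -87.2),     # where in the game world to render it
    "index": 2,                              # disambiguates otherwise-ambiguous coordinates
    "camera_orientation": (0.0, 90.0, 0.0),  # e.g., orientation of a captured screenshot
    "pixel_data": None,                      # e.g., pixels of a captured screenshot
    "event": None,                           # e.g., "bullet_fired" for event-driven content
    "channels": ["Screen Shots"],            # AR channel(s) this record is associated with
}

def store_ar_content(spatial_db: list, record: dict) -> None:
    """Sketch of blocks 408/410/412: persist the record together with its associations."""
    spatial_db.append(record)
```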

As shown by sub-block412, the storing of the new AR content120in the spatial database204may involve associating the new AR content120with one or more AR channels306, as described herein. In this manner, users106may subscribe to an AR channel306to receive particular AR content120that can be displayed within a game world of their video game.

At414, the remote computing system200may receive, over the computer network218from a client machine100, a subscription request from a user account to subscribe to an AR channel(s)306as a subscribed AR channel(s)306. Examples of AR channels306are described herein with reference toFIG. 3.

At416, the remote computing system200may receive, over the computer network218from the client machine100, video game data128associated with a video game104. This video game104may be a video game executing via the client machine100(e.g., by launching the video game client116on the client machine100and executing the video game104via the video game client116, causing the AR component102to send video game data128to the remote computing system200).

At418, the remote computing system200may identify, among the available AR content120stored in the spatial database204, game-specific AR content120based at least in part on the video game data128. For example, records of AR content120in the spatial database204may be associated with the game ID224for the particular video game104executing on the client machine100, spatial data227associated with a to-be-rendered portion of a game world, etc., and that AR content120may be identified at block418, as it is relevant to the particular video game104executing on the client machine100, and to the current state of the video game104, as indicated by the video game data128.

At sub-block420, the remote computing system200may filter the game-specific AR content120identified at block418based on the subscribed AR channel(s)306for the user account in question. This may result in a subset of the game-specific AR content120that is associated with the subscribed AR channel306. For example, the AR content120associated with the video game104in question may pertain to multiple AR channels306, so the AR channels306to which the user account in question has not subscribed can be used to filter out irrelevant AR content120, and the subset of the game-specific AR content120remaining may be that which is associated with the subscribed AR channel(s)306. At sub-block422, the remote computing system200may further filter the game-specific AR content120based on one or more filtering criteria to obtain the subset of the game-specific AR content120that is ultimately to be sent to the client machine100. An example filtering criterion may be satisfied at block422if a popularity of the subset of the game-specific AR content120is greater than a popularity threshold. In this example, the popularity can be determined based on at least one of a number of views or a number of positive votes associated with the subset of the game-specific AR content120, and/or based on a user's prior activity, or the user's social network relationships. Other filtering criteria, as described herein, are also contemplated, such as filtering the game-specific AR content120based on access rights associated with the user account, based on the creation date of the AR content120(e.g., filtering out AR content120older than a threshold age), based on user interests associated with the user account, based on a time of day, based on a geographic location associated with the client machine100or the user account, and so on.

At424, the remote computing system200may send, over the computer network218, the subset of the game-specific AR content120to the requesting client machine100. In this manner, the client machine100is provided access to relevant AR content120that can be rendered “on top of” video game content124in an augmented frame122during execution of a video game. It is to be appreciated that at least blocks416-424can occur in real-time during game execution to provide real-time AR content120to the client machine100over the network218. In some embodiments, the process400is performed close to a time when the video game starts executing so that the client machine100receives filtered game-specific AR content120that can be stored locally for access during game execution.

In some embodiments, at block418, the remote computing system200may further identify, among the available AR content stored in the spatial database204and based at least in part on the video game data128, an additional subset of the game-specific AR content120that is associated with an unsubscribed AR channel306. In this scenario, at block424, the remote computing system200may also send, over the computer network, the additional subset of the game-specific AR content120to the client machine100for presentation on the client machine100in a manner that visually distinguishes the additional subset of the game-specific AR content from the subset of the game-specific AR content. That is, AR content120to which the user106has not yet subscribed may be presented in a subtle manner (e.g., via relatively small icons indicating that AR content120is available, which the user106can click on if interested). This unsubscribed AR content may be spatially-relevant based on the video game data128.

FIG. 5is a flow diagram of an example process500for receiving AR content from a remote computing system200and rendering frames, including augmented frames122that include the AR content120, during execution of a video game104. For discussion purposes, the process500is described with reference to the previous figures.

At502, a client machine100may send, over a computer network218to a remote computing system200, a subscription request to subscribe to an AR channel(s)306as a subscribed AR channel(s)306. Examples of AR channels306are described herein with reference toFIG. 3.

At504, the client machine100may start execution of a video game104. For example, a user106of the client machine100may launch the video game client116and may select a video game104from a video game library118to start execution of the video game104.

At506, the client machine100may send, over the computer network218to the remote computing system200, a request that includes a game ID224of the video game104. This may be automated logic responsive to the user106starting execution of a video game104on the client machine100. The game ID224allows the remote computing system200to lookup game-specific AR content120that is associated with the game ID224.

At508, the client machine100may receive, over the computer network218, a subset of game-specific AR content120from the remote computing system200along with associated data. The subset of the game-specific AR content120received at block508may have been selected by the remote computing system200based at least in part on the game ID224and the subscribed AR channel(s)306. Thus, the subset of the game-specific AR content120may be associated with both the video game104and with the subscribed AR channel(s)306.

At sub-block510, the client machine100may store, in local memory of the client machine100, the subset of the game-specific AR content120in association with the associated data. The associated data may include, without limitation, spatial data227, pixel data232, event data, and the like, which may be used to determine when to render particular AR content120in an augmented frame122during game execution.

At512, the client machine100, via the video game client116, may render a series of frames during execution of the video game104. For example, as the video game104executes in a first process (or thread(s)), the video game104may output video game content124in a series of frames. The operations that may be iteratively performed at block512to render individual frames in the series of frames are shown by sub-blocks514-524.

At514, an AR component102—executing via the video game client116on the client machine100as a separate process (e.g., or thread(s)) from the video game104—may obtain, from the video game104, video game data128about a current state of the video game104. This video game data128may be spatial data227that relates to game world coordinates226within the game world of the video game104. For example, the spatial data227may specify a current location of a player-controlled character within the game world of the video game104, a set of game world coordinates226for a portion of the game world that is to be rendered in the upcoming frame, an index228, a current orientation of a virtual camera associated with the player-controlled character, and the like. The video game data128may be event data specifying a game-related event associated with the video game104. In some embodiments, obtaining the video game data128from the video game104at block514may include the AR component102receiving the video game data128from the video game104as part of a function call made by the video game104to request the AR content120from the AR component102in order to render the frame as the augmented frame122, if necessary.

At516, the frame may be rendered as a normal frame without any AR content120(i.e., exclusively with video game content124). At518, the AR component102may determine whether AR content120can be identified based on the video game data128. For example, the AR component102may compare at least a portion of the video game data128it obtained at block514to the associated data stored with the accessible AR content120(e.g., the subset of the game-specific AR content120the client machine100received at block508). If the AR component102determines that there is no identifiable AR content120to be displayed, the process500may follow the “NO” route from block518to proceed to the next frame at block520. If, on the other hand, the AR component102determines that there is identifiable AR content120to be displayed, the AR component102identifies the AR content120based at least in part on the video game data128(e.g., by comparing the video game data128to the associated data stored with the AR content120), and the process500may follow the “YES” route from block518to block522. The identification of the AR content120may include retrieving/receiving the AR content120from a source other than the video game104(e.g., from local memory of the client machine100, from the remote computing system200in real time, etc.). In the case of real-time retrieval of AR content120over the network218, the receiving of the AR content at block508may occur in conjunction with (e.g., in response to) the identification of the AR content120at block518.

At522, the client machine100, via the AR component102, may generate an augmented frame122that includes the video game content124output by the video game104and the AR content120identified and retrieved at block518. As mentioned, the generation of the augmented frame122may include overlaying the AR content120on the video game content124. In some embodiments, the operations at block522include obtaining depth data associated with the video game content124for the given frame, and merging the video game content124and the AR content120in the augmented frame122based at least in part on the depth data. For instance, AR content120may be layered over some video game content124(e.g., opaque graphics) and under other video game content124(e.g., translucent graphics).
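
A depth-based merge of this kind could be sketched per pixel as below; real systems would do this on the GPU with depth buffers rather than Python lists, and the None-means-no-AR convention is just for the example.

```python
def composite(game_rgb, game_depth, ar_rgb, ar_depth):
    """Sketch of depth-aware merging: per pixel, keep whichever layer is closer.

    All four inputs are same-sized 2D lists; an AR depth of None means no AR
    content was drawn at that pixel.
    """
    height, width = len(game_rgb), len(game_rgb[0])
    out = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            ad = ar_depth[y][x]
            # The AR pixel wins only where it exists and is nearer than the game
            # geometry, so game content correctly occludes AR objects behind walls.
            if ad is not None and ad < game_depth[y][x]:
                out[y][x] = ar_rgb[y][x]
            else:
                out[y][x] = game_rgb[y][x]
    return out
```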

At524, the client machine100, via the AR component102, may render the augmented frame122on a display(s)110associated with the client machine100, and may then proceed to the next frame at block520to iterate blocks514-524, as shown by the arrow from block520to block514.

The player106can effectively change the AR channel306at any time during the process500by subscribing to one or more different AR channels306, which causes the client machine100to send a new subscription request to the remote computing system200so that different or additional AR content120can be received from the remote computing system200based on the newly subscribed AR channel(s)306. In this manner, the player106can switch between AR channels306during gameplay to have different AR content120presented in augmented frames122during execution of the video game.

FIG. 6is a flow diagram of an example process600for augmenting a frame with spatially-relevant AR content120during execution of a video game104. The operations of the process600may be involved in rendering an individual frame as an augmented frame in the series of frames that are rendered during the execution of the video game. For discussion purposes, the process600is described with reference to the previous figures.

At602, an AR component102—executing via the video game client116on a client machine100—may obtain, from the video game104, video game data128in the form of spatial data227about a current state of the video game104. For example, the spatial data227obtained at block602may relate to, without limitation, current game world coordinates, such as the current coordinates (i.e., current location) of a player-controlled character126within the game world of the video game104, a set of game world coordinates226for a portion of the game world that is to be rendered in the upcoming frame, a game ID224, an index228, a current camera orientation of a virtual camera (e.g., a virtual camera associated with the player-controlled character126), or any combination thereof.

At604, the AR component102may identify AR content120from available AR content (e.g., a subset of game-specific AR content120received from a remote computing system200) based at least in part on the spatial data227.

At sub-block606, the AR content120can be identified based at least in part on the game world coordinates226in the received spatial data227. For example, the AR component102may identify AR content120that is associated with game world coordinates226that are included in the set of game world coordinates226in the spatial data227, or that are within a threshold distance from a current location of the player-controlled character126and within a field of view of the game character126, as determined from the current orientation of the virtual camera.
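
A simplified version of that distance-plus-field-of-view test is sketched below, restricted to the horizontal plane; the world_coords field and the yaw convention are assumptions carried over from the earlier storage sketch.

```python
import math

def identify_spatial_ar(records, player_pos, camera_yaw_deg, max_dist=50.0, fov_deg=90.0):
    """Sketch of sub-block 606: keep records near the player and inside the view cone."""
    hits = []
    px, _, pz = player_pos
    for r in records:
        rx, _, rz = r["world_coords"]
        dx, dz = rx - px, rz - pz
        if math.hypot(dx, dz) > max_dist:
            continue  # too far away to matter this frame
        # Angle from the camera's forward direction to the record, in degrees.
        bearing = math.degrees(math.atan2(dx, dz))
        delta = (bearing - camera_yaw_deg + 180.0) % 360.0 - 180.0
        if abs(delta) <= fov_deg / 2.0:
            hits.append(r)  # within the field of view
    return hits
```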

At sub-block610, the AR content120can be identified based at least in part on the game ID224specified in the spatial data227. For example, the AR component102may identify AR content120that is associated with a game ID that matches the game ID224in the spatial data227. Meanwhile, the index228may be usable to disambiguate between multiple instances of game world coordinates226, if the game world includes multiple instances of the game world coordinates226specified in the spatial data227.

At612, the client machine100, via the AR component102, may generate an augmented frame122that includes the AR content120identified (and retrieved) at block604(and possibly the video game content124output by the video game104). As shown by sub-block614, in the case of the identified AR content120being a 3D screenshot (e.g., an image with depth data), the generation of the augmented frame122at block612may include constructing a 3D model of a portion of a game world exhibited in the 3D screenshot based at least in part on the data included in the 3D screenshot (e.g., the 2D array of pixel data plus depth data from a depth buffer). The resulting 3D model can be overlaid on video game content124within the augmented frame122at block612. For instance, the identified AR content120may be a 3D screenshot of the game world of the video game104that is currently executing on the client machine, or a different game world of a different video game. In either case, the 3D screenshot may have been captured by another gamer106, perhaps at an earlier time, capturing the full context of the game world in that instant when the 3D screenshot was captured. The constructed 3D model that is overlaid on the video game content124as AR content120may allow the player106of the currently-executing video game104to navigate around the constructed 3D model (e.g., move around objects captured in the 3D screenshot) to see the “slice” of the game world captured by the other gamer106, and in a way that mimics what the other gamer106saw at the time the 3D screenshot was captured. As shown by sub-block616, in the case of the identified AR content120being a plurality of sequential 3D screenshots, the generation of the augmented frame122at block612may include starting playback of a 3D video based at least in part on the plurality of sequential 3D screenshots.
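
The back-projection step in sub-block 614 can be sketched with a pinhole camera model, as below; the list-of-lists buffers and the fixed field of view are simplifying assumptions for the example.

```python
import math

def screenshot_to_points(pixels, depth, fov_deg=90.0):
    """Sketch of sub-block 614: lift a 2D screenshot plus depth buffer into 3D points.

    pixels[v][u] is a color; depth[v][u] is distance along the view axis (None = no depth).
    Returns (x, y, z, color) points in the screenshot's camera space, which could then
    be placed into the current game world as a 3D model.
    """
    h, w = len(pixels), len(pixels[0])
    focal = (w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)  # pinhole focal length in pixels
    points = []
    for v in range(h):
        for u in range(w):
            z = depth[v][u]
            if z is None:
                continue  # sky or missing depth: nothing to reconstruct
            # Back-project the pixel through the pinhole model.
            x = (u - w / 2.0) * z / focal
            y = (v - h / 2.0) * z / focal
            points.append((x, y, z, pixels[v][u]))
    return points
```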

At618, the client machine100, via the AR component102, may render the augmented frame122on a display(s)110associated with the client machine100, and may then proceed to the next frame as part of an iterative process of rendering a series of frames during execution of the video game104.

FIG. 7is a flow diagram of an example process700for augmenting a frame with dynamic and/or interactive AR content120during execution of a video game104. The operations of the process700may be involved in rendering an individual frame as an augmented frame in the series of frames that are rendered during the execution of the video game. For discussion purposes, the process700is described with reference to the previous figures.

At702, an AR component102—executing via the video game client116on a client machine100—may obtain video game data128about a current state of a video game104, as described herein. As shown by sub-block704, obtaining the video game data128may include obtaining a 3D model of a to-be-rendered portion of the game world in the upcoming frame. This 3D model may be retrieved from the remote computing system200, which may have previously generated 3D models for the game world of the video game based on dense, temporally contiguous video streams (e.g., including color data with, or without, depth buffer data) and stored the 3D models for client machines100to access on demand. A SLAM process may be performed offline and may be used to reconstruct game world geometry (e.g., 3D models of game worlds) incrementally from many images. This backend process may be done by a service provider of the video game platform, by game developers, and/or by crowd-sourcing game world images from player client machines100.

At706, the AR component102may identify AR content120from available AR content (e.g., a subset of game-specific AR content120received from a remote computing system200) based at least in part on the video game data128. In order to identify the AR content120, the AR component102may lookup a record of AR content120from available AR content using the video game data128and may determine that the record of AR content120provides an executable program that is configured to generate dynamic and/or interactive AR content120. This executable program may have been created by using a plugin, as described herein.

At sub-block708, the AR component102may provide the video game data128(e.g., the 3D model obtained at block704) as input to the executable program that is configured to generate the dynamic and/or interactive AR content120based at least in part on the video game data128. In some embodiments, the AR component102may create a security sandbox, load one or more executable programs or plugins (e.g., DLLs) that correspond to an AR channel(s) to which the user106of the client machine100is subscribed, and provide video game data128as input to the plugins to have the plugins run their respective logic and return AR content120. For example, there could be a folder of DLLs, each DLL representing a different plugin. When the user106subscribes to an AR channel(s), the AR component102may load the corresponding DLL(s) within a security sandbox, and then run through the process700where, at sub-block708, the AR component102may provide video game data128as input to the corresponding DLL(s) that have been loaded.
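
The load-then-fan-out pattern could be sketched as follows, substituting Python modules for the DLLs mentioned above; the one-file-per-channel layout, the update() entry point, and the omission of real sandboxing (process isolation, restricted APIs) are all simplifications for this illustration.

```python
import importlib.util
from pathlib import Path

def load_channel_plugins(plugin_dir, subscribed_channels):
    """Sketch: load one plugin module per subscribed AR channel from a folder."""
    plugins = []
    for channel in subscribed_channels:
        path = Path(plugin_dir) / f"{channel}.py"
        if not path.exists():
            continue  # no plugin shipped for this channel
        spec = importlib.util.spec_from_file_location(channel, str(path))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # a real system would do this inside a sandbox
        plugins.append(module)
    return plugins

def run_plugins(plugins, video_game_data):
    """Fan the per-frame video game data out to each plugin and collect returned AR content."""
    ar_content = []
    for plugin in plugins:
        ar_content.extend(plugin.update(video_game_data))
    return ar_content
```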

At sub-block710, the AR component102may receive, as output from the executable program(s), the identified AR content as the dynamic and/or interactive AR content120. The AR component102may also receive, as output from the executable program(s), position data for positioning the dynamic and/or interactive AR content120within an augmented frame122, the position data based at least in part on the 3D model of the to-be-rendered portion of the game world.

At712, the client machine100, via the AR component102, may generate an augmented frame122that includes the video game content124output by the video game104and the AR content120identified at block706. By providing the 3D model as input to the executable program, dynamic (e.g., moving and/or animating) AR objects can be automatically injected into the game world in a manner that is sensible with respect to the geometry and/or topology of the game world. For instance, a moving or animating AR game character may move within the game world while avoiding collisions with virtual objects in the game world, such as barriers, walls, doors, etc., and/or AR content120may be positioned sensibly against walls, on the floor, and the like.

At718, the client machine100, via the AR component102, may render the augmented frame122on a display(s)110associated with the client machine100, and may then proceed to the next frame as part of an iterative process of rendering a series of frames during execution of the video game104.

FIG. 8is a diagram illustrating an example technique for adding multiplayer aspects to a single player video game using the disclosed AR system. InFIG. 8, a first client machine100(1) and a second client machine100(2) are each connected to a computer network218in order to exchange data with the remote computing system200(not shown inFIG. 8) and with other client machines100over the computer network218. In the example ofFIG. 8, a first player106(1) is playing a video game104on the first client machine100(1), and a second player106(2) is also playing the same video game104on the second client machine100(2). The video game104can be independently executed on each client machine100without any networking code such that there is no reliance on a network connection to execute the video game104. In this sense, the video game104executing on each client machine may, in some examples, be a single player video game that the individual players106can play by themselves.

The network connection, however, enables the transfer of data800via the remote computing system200and between client machines100that are executing the same single player video game104, as shown inFIG. 8. This allows for adding multiplayer aspects to a single player video game without any reliance on the game developer to build multiplayer aspects into their video game, which can be a costly endeavor. Instead, the code of the video game104may be configured to iteratively emit data800(e.g., emit data800every frame, every couple of frames, every few frames, etc.) about a current state of the video game104. This data800emitted from the video game104executing on the second client machine100(2) can be transmitted over the computer network218to the first client machine100(1) that is executing a local AR component102. The AR component102on the first client machine100(1) may receive the data800sent from the second client machine100(2) and may process the data800to generate an augmented frame122that includes AR content120(1) that adds multiplayer aspects to a first instance of the game world rendered on the first client machine100(1). For example, a ghost image of the second player's106(2) game character can be rendered on the screen of the first client machine100(1) as AR content120(1) to enable a multiplayer aspect like “speed running” where the first player106(1) can see where the second player's106(2) game character is relative to the first player's106(1) game character. In this manner, the first player106(1) can compete with the second player106(2) without having to implement multiplayer aspects into the code of the video game104itself. These multiplayer aspects can be added in real-time by sending the data800over the network218in real-time as the video game104is played on each client machine100. Additionally, or alternatively, a game performance of the second player106(2) can be saved and replayed as an AR content stream during execution of the video game104on the first client machine100(1) at a later time. In this manner, the AR content120(1) that is retrieved and used to augment the frames during gameplay on the first client machine100(1) may be a live or replayed stream of a second player's106(2) game performance in the same video game104. Thus, the first player106(1) can compete in real-time, and/or the first player106(1) can keep practicing over and over against a replay of the second player's106(2) game performance. In some embodiments, frames, including the augmented frames122, rendered on the first client machine100(1) may be saved and uploaded to the remote computing system200as a video clip so that others can replay the competitive experience, as seen by the first player106(1).
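
A bare-bones version of the "ghost" overlay might look like this; the state-message schema and the axis-aligned visibility check stand in for whatever the emitted data800and the AR component's culling actually look like.

```python
class GhostOverlay:
    """Toy sketch of the speed-running ghost: render a peer's character as AR content."""

    def __init__(self):
        self.latest = {}  # player name -> last known position

    def on_network_data(self, state: dict) -> None:
        # Called for each state message emitted by the peer's game instance,
        # e.g., {"player": "P2", "pos": (x, y, z)} every frame or few frames.
        self.latest[state["player"]] = state["pos"]

    def ar_objects_for_frame(self, visible_region):
        """Return ghost avatars for peers currently inside the to-be-rendered region."""
        lo, hi = visible_region  # axis-aligned bounds of the view
        ghosts = []
        for player, pos in self.latest.items():
            if all(lo[i] <= pos[i] <= hi[i] for i in range(3)):
                ghosts.append({"model": "ghost_avatar", "label": player, "pos": pos})
        return ghosts
```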

In another aspect, the game world802(2) can be stitched into the game world802(1) on the first client machine100(1) to effectively mix the two game worlds802(1) and802(2) together in a manner that aligns the coordinate systems of the two game worlds802(1) and802(2). For example, if the two player-controlled characters826(1) and826(2) are located near each other (e.g., within a threshold distance) within the same game world of the same video game104, at the same time, the first client machine100(1) can receive, from the second client machine100(2), a 3D screenshot(s) of the second game world802(2) along with a set of coordinates for that portion(s) of the second game world802(2) depicted in the 3D screenshot(s), and the AR component102of the first client machine100(1) may align the graphics in the received screenshot with the current set of coordinates in the first game world802(1) to mix the two game worlds together. For example, the first player106(1) may be able to see an object as AR content120(2) that is seen by the second player106(2) in the second game world802(2). The differences between the two game worlds802(1) and802(2) in the augmented frame122can be indicated visually through highlighting of objects or graphics rendered as AR content120(e.g., using a different color, etc.). InFIG. 8, the differences between the two game worlds802(1) and802(2) are shown in the augmented frame122by the objects from the second game world802(2) being shown in dashed lines, while the objects in the first game world802(1) are shown in solid lines. A visual distinction may help the first player106(1) distinguish between video game content124and AR content120, when the AR content120is stitched into the first game world802(1) in a way that would otherwise make it difficult to discern what is AR content120in the augmented frame122.

To illustrate the operation of the technique shown inFIG. 8, reference is now made toFIG. 9, which is a flow diagram of an example process900for using an AR system to add multiplayer aspects to a single player game through the exchange of data between client machines100over a computer network218. Consider a scenario where a first instance of a game world802(1) is being rendered on the first client machine100(1) during execution of the video game104on the first client machine, and a second instance of the same game world802(2) is being rendered on the second client machine100(2) during an independent execution of the same video game104on the second client machine100(2). As shown inFIG. 8, during the independent execution of the video game104on each client machine100, the first player106(1) may control a first player-controlled character826(1), while the second player106(2) may control a second player-controlled character826(2).

At902, the first client machine100(1) may receive, over a computer network218from a second client machine100(2), data800in the form of spatial data227that specifies a current location of a second player-controlled character826(2) within the game world rendered on the second client machine100(2) as a second instance of the game world802(2).

At904, in order to render an augmented frame122on the first client machine100(1) (e.g., as shown inFIG. 8by reference numeral122), the AR component102executing on the first client machine100(1) may obtain video game data128from the video game104executing on the first client machine100(1), the video game data128in the form of spatial data227that specifies first game world coordinates226associated with a portion of a game world of the video game104that is to be rendered on the first client machine100(1) as a first instance of the game world802(1).

At906, the AR component102of the first client machine100(1) identifies AR content120to use in generating an augmented frame122. As shown by sub-block908, the AR content120can be identified by determining that the current location of the second player-controlled character826(2) is within the portion of the game world that is to-be-rendered on the screen of the first client machine100(1) based at least in part on the first game world coordinates226. As shown by sub-block910, the AR content120can be retrieved as an AR avatar of the second player-controlled character826(2). For instance, the first client machine100(1) may receive AR content120in the form of an AR avatar from the remote computing system200(prior to, or during, execution of the video game104). In this manner, a record of AR content that includes the AR avatar is accessible to the first client machine100(1).

At912, the first client machine100(1), via the AR component102, may generate an augmented frame122that includes the video game content124output by the video game104executing on the first client machine100(1) and the AR content120identified (and retrieved) at block906. As shown by sub-block914, the generation of the augmented frame122may include presenting the AR avatar of the second player-controlled character826(2) as AR content120on the video game content124at a location within the first instance of the game world802(1) that corresponds to the current location of the second player-controlled character826(2).

At916, the first client machine100(1), via the AR component102, may render the augmented frame122on a display(s)110associated with the first client machine100(1), and may then proceed to the next frame as part of an iterative process of rendering a series of frames during execution of the video game104. An example of this augmented frame122is shown inFIG. 8, where the AR content120in the augmented frame122is the AR avatar of the second player-controlled character826(2), at a location that corresponds to that game character's current location in the game world. This allows for multiplayer aspects to be added as an augmentative feature in the video game that is executing on the first client machine100(1), which may be a single player game.

FIG. 10is a diagram illustrating an example technique for using the disclosed AR system to share aspects of game worlds between client machines. InFIG. 10, a first client machine100(1) and a second client machine100(2) are each connected to a computer network218in order to exchange data with the remote computing system200(not shown inFIG. 10) and with other client machines100over the computer network218. In the example ofFIG. 10, a first player106(1) is playing a video game104on the first client machine100(1), and a second player106(2) is playing a video game104on the second client machine100(2). The video game104executing on each client machine100can be the same video game or different video games, and they may be single player or multiplayer video games.

The network connection enables the transfer of not only data, but also AR content120via the remote computing system200and between the client machines100. For example, the AR content120sent from the second client machine100(2) to the first client machine100(1) may represent screenshots (e.g., 2D or 3D screenshots) of a portion of a second game world1002(2) being rendered on the second client machine100(2). This AR content120can be iteratively transmitted (e.g., every frame, every couple of frames, every few frames, etc.) during execution of a video game104on the second client machine100(2). The AR component102on the first client machine100(1) may receive the AR content120sent from the second client machine100(2) and may use the AR content120to generate an augmented frame(s)122that includes the received AR content120. Thus, the real-time transmission, over the network218, of AR content120in the form of screenshots allows for sharing of game worlds between client machines, such as the sharing of the second game world1002(2) within the first game world1002(1) that is being rendered on the first client machine100(1). For example, as the second player106(2) controls the movement of a second player-controlled character1026(2) around the second game world1002(2), screenshots of the second game world1002(2), as seen from the perspective of the second player-controlled character1026(2), can be sent as AR content120over the network218to the first client machine100(1). As the first player106(1) controls the movement of a first player-controlled character1026(1) around the first game world1002(1), the received screenshots can be presented in augmented frames122as AR content120, such as by rendering the screenshots through a “portal” on a wall in the first game world1002(1). This can enable different types of functionality relating to the sharing of game worlds between client machines.

For example, the AR content120may provide the first player106(1) with a viewport into the second game world1002(2). In this scenario, one or more screenshots of the second game world1002(2), as seen from the perspective of the second player-controlled character1026(2), may be transmitted to the first client machine100(1) for display as AR content120within the first game world1002(1). A series of screenshots can be transmitted over the network218to enable live spectating of the second player's106(2) gameplay. For example, a first player106(1) may be able to spectate the second player's106(2) gameplay through the AR content120that is presented as a viewport into the second game world1002(2). The series of screenshots transmitted over the network218can be presented as a live AR video stream via the AR component102on the first client machine100(1) so that the first player106(1) can watch the second player-controlled character1026(2) move around the second game world1002(2). In order to enable a 3D viewport into the second game world1002(2), the first client machine100(1) may receive AR content120in the form of a 3D screenshot(s), and the AR component102on the first client machine100(1) may construct a 3D model of the game world1002(2) depicted in the 3D screenshot and/or present the 3D screenshot positioned based on a camera orientation associated with the 3D screenshot. This can allow the first player106(1) to look around and/or move around the reconstructed 3D model of the second game world1002(2) to see even more detail about the environment of the second player-controlled character1026(2).
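
The live viewport could be approximated with a small jitter buffer over the incoming screenshots, as sketched below; texturing the newest image onto an in-world "portal" surface is assumed to be handled elsewhere by the AR component.

```python
from collections import deque

class PortalViewport:
    """Sketch of a live AR viewport: buffer a peer's screenshots, display the newest."""

    def __init__(self, max_buffer=3):
        self.frames = deque(maxlen=max_buffer)  # old frames fall off automatically

    def on_screenshot(self, frame_number, image):
        # Called as each (frame_number, image) pair arrives over the network.
        self.frames.append((frame_number, image))

    def current_image(self):
        """Image to texture onto the portal this frame, or None if nothing has arrived yet."""
        return self.frames[-1][1] if self.frames else None
```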

Another example of sharing game worlds between client machines100involves stitching or mixing two game worlds1002(1) and1002(2) together in a manner that aligns the coordinate systems of the two game worlds1002(1) and1002(2). For example, if the two player controlled characters1026(1) and1026(2) are located near each other (e.g., within a threshold distance) within the same game world of the same video game104, at the same time, the first client machine100(1) can receive, from the second client machine100(2), a 3D screenshot(s) of the second game world1002(2) along with a set of coordinates for that portion(s) of the second game world1002(2) depicted in the 3D screenshot(s), and the AR component102of the first client machine100(1) may align the graphics in the received screenshot with the current set of coordinates in first game world1002(1) to mix the two game worlds together. Imagine the first player-controlled character1026(1) located at the same location within a game world as the second player-controlled character1026(2), and the first player106(1) being able to see a 3D rendering of an enemy (AR content120) seen by the second player106(2) in the second game world1002(2). The differences between the two game worlds1002(1) and1002(2) in the augmented frame122can be indicated visually through highlighting of objects or graphics rendered as AR content120(e.g., using a different color, etc.). This may help the first player106(1) distinguish between video game content124and AR content120, when the AR content120is stitched into the first game world1002(1) in a way that would otherwise make it difficult to discern what is AR content120in the augmented frame122.

To illustrate the operation of the techniques described with reference toFIG. 10, reference is now made toFIG. 11, which is a flow diagram of an example process1100for using an AR system to share aspects of game worlds between client machines.

At 1102, an AR component 102, executing via the video game client 116 on a first client machine 100(1), may obtain video game data 128 from a video game 104 executing on the first client machine 100(1), as described herein.

At 1104, the AR component 102 of the first client machine 100(1) identifies AR content 120 to use in generating an augmented frame 122. As shown by sub-block 1106, the AR content 120 can be identified by receiving the AR content 120 over a computer network 218 from a second client machine 100(2). For example, the AR content 120 may be in the form of a screenshot(s) of a portion of a game world 1002(2) rendered on the second client machine 100(2), which may be the same game world as, or a different game world from, the game world 1002(1) rendered on the first client machine 100(1). In some embodiments, along with the AR content 120, the first client machine 100(1) may receive additional data 800 from the second client machine 100(2), such as spatial data 227 that relates to game world coordinates 226 of the second game world 1002(2) rendered on the second client machine 100(2). This may include the coordinates that correspond to the current location of the second player-controlled character 1026(2) within the second game world 1002(2).

At 1108, the first client machine 100(1), via the AR component 102, may generate an augmented frame 122 that includes the video game content 124 output by the video game 104 and the AR content 120 identified (and received) at block 1104. In some embodiments, the video game data 128 obtained at block 1102 may allow the AR content 120 to be provided in the context of the game world 1002(1) depicted in the to-be-rendered augmented frame 122, such as by locating the AR content 120 in the form of a screenshot on a wall in the game world 1002(1), or otherwise attaching the AR content 120 to an object in the game world 1002(1). As shown by sub-blocks 1110, 1112, and 1114, the generation of the augmented frame 122 can be implemented in various ways, one of which is sketched below.
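Read as a compositing step, block 1108 might look like the following sketch, where project_to_screen() and the frame's drawing methods stand in for engine-specific calls that this disclosure does not name; all identifiers here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class AnchoredQuad:
        world_corners: list   # four (x, y, z) corners, e.g., on a wall
        texture: bytes        # the received screenshot to display there

    def project_to_screen(world_point, camera):
        # Hypothetical projection from game-world coordinates to pixel coordinates.
        raise NotImplementedError

    def generate_augmented_frame(game_frame, quads, camera):
        # Start from the frame the video game already produced, then draw each
        # AR quad at the screen position implied by its game-world anchor.
        frame = game_frame.copy()
        for quad in quads:
            corners = [project_to_screen(c, camera) for c in quad.world_corners]
            frame.draw_textured_quad(corners, quad.texture)  # engine-specific
        return frame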

At sub-block 1110, the AR component 102 of the first client machine 100(1) may construct a viewport into the game world 1002(2) rendered on the second client machine 100(2) based at least in part on the AR content 120 (e.g., the screenshot(s)) received at block 1106. For example, a 3D screenshot can be used to reconstruct a 3D view of the second player's 106(2) game world 1002(2) within a portal rendered as an AR object in the first game world 1002(1). A series of screenshots can be rendered much like a video stream of AR content 120 to provide a live viewport that can be used for spectating the second player's 106(2) gameplay. In some embodiments, the viewport constructed at sub-block 1110 may include other AR content 120 seen by the second player 106(2) of the second client machine 100(2) within the second game world 1002(2).

At sub-block 1112, the AR component 102 of the first client machine 100(1) may stitch the game world 1002(2) rendered on the second client machine 100(2) together with the game world 1002(1) rendered on the first client machine 100(1) based at least in part on game world coordinates 226 received from the second client machine 100(2). For example, the current game world coordinates of the two instances of the game world can be aligned if both are within a threshold distance of a common location within the game world. In some embodiments, the AR content 120 is in the form of a 3D screenshot of the game world 1002(2), which can be used to obtain a camera pose associated with that 3D screenshot. This camera pose can be used to align and/or orient other AR content 120 (e.g., objects, game characters, etc.) within the game world 1002(1), for example as sketched below.
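The pose-based alignment can be sketched as a single homogeneous transform carrying content from the remote camera's frame into the shared world frame; the 4x4 row-vector convention below is an assumption.

    import numpy as np

    def align_ar_geometry(vertices_cam: np.ndarray,
                          remote_camera_pose: np.ndarray) -> np.ndarray:
        # vertices_cam: N x 3 AR geometry expressed in the remote camera's frame,
        # as recovered from a 3D screenshot; remote_camera_pose: 4x4 camera-to-world.
        n = vertices_cam.shape[0]
        homogeneous = np.hstack([vertices_cam, np.ones((n, 1))])
        # One matrix multiply orients the AR objects consistently with the
        # stitched game world on the first client machine.
        return (homogeneous @ remote_camera_pose.T)[:, :3]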

At 1116, the first client machine 100(1), via the AR component 102, may render the augmented frame 122 on a display(s) 110 associated with the first client machine 100(1), and may then proceed to the next frame as part of an iterative process of rendering a series of frames during execution of the video game 104.

FIG. 12 is a flow diagram of an example process 1200 for exchanging events between a video game and a separately executing AR component on a client machine.

At 1202, an AR component 102 may be executed via the video game client 116 on a client machine 100 as a separate process from a video game 104 executing on the client machine 100.

At 1204, the AR component 102 may receive one or more game-emitted events from the video game 104. For example, a game developer may specify game events that the video game will emit during execution, and authors 216 of AR content 120 can subscribe to those “game-emitted events” to learn about what is happening in the video game 104 for purposes of generating AR content 120 and returning game-accepted events to the video game 104.
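This subscription relationship might be realized as a small publish/subscribe registry inside the AR component; the event name and payload in the sketch below are invented for illustration.

    from collections import defaultdict
    from typing import Callable, Dict, List

    class GameEventBus:
        # Minimal pub/sub: AR authors subscribe handlers to game-emitted events.
        def __init__(self) -> None:
            self._handlers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, event_name: str, handler: Callable[[dict], None]) -> None:
            self._handlers[event_name].append(handler)

        def emit(self, event_name: str, payload: dict) -> None:
            # Invoked when the video game emits an event toward the AR component.
            for handler in self._handlers[event_name]:
                handler(payload)

    bus = GameEventBus()
    bus.subscribe("boss_defeated",
                  lambda e: print("spawn AR fireworks at", e["pos"]))
    bus.emit("boss_defeated", {"pos": (10.0, 2.0, -4.0)})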

At 1206, the AR component 102 may identify AR content 120. This identification may be based on providing the game-emitted event(s) as input to an executable program(s) (e.g., a plugin(s)). In some embodiments, the identification may additionally, or alternatively, be based on video game data 128 (e.g., spatial data 227) obtained from the video game 104.

At 1208, the AR component 102 may send one or more game-accepted events to the video game 104 in response to the receiving of the one or more game-emitted events. Again, the game developer may specify events that the video game 104 is capable of accepting. In some embodiments, the executable program(s) (e.g., a plugin(s)) may output these game-accepted events based at least in part on the game-emitted events provided as input.
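Taken together, blocks 1204 through 1208 form a round trip that a plugin interface along the following lines could implement. Every name here (including the puzzle_solved and unlock_door events) is hypothetical, since the disclosure leaves the event vocabulary to the game developer.

    from typing import Iterable, Optional, Tuple

    class ARPlugin:
        # A plugin consumes game-emitted events and may return AR content plus
        # game-accepted events for the AR component to send back to the game.
        def on_game_event(self, event: dict) -> Tuple[Optional[bytes], Iterable[dict]]:
            raise NotImplementedError

    class DoorUnlockPlugin(ARPlugin):
        def on_game_event(self, event):
            if event.get("type") == "puzzle_solved":
                ar_marker = b"<encoded AR marker>"        # illustrative AR content
                accepted = [{"type": "unlock_door",       # game-accepted event
                             "door_id": event["door_id"]}]
                return ar_marker, accepted
            return None, []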

At 1210, the client machine 100, via the AR component 102, may generate an augmented frame 122 that includes video game content 124 output by the video game 104 and the AR content 120 identified at block 1206.

At 1212, the client machine 100, via the AR component 102, may render the augmented frame 122 on a display(s) 110 associated with the client machine 100, and may then proceed to the next frame as part of an iterative process of rendering a series of frames during execution of the video game 104.

Although the subject matter has been described in language specific to structural features, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features described. Rather, the specific features are disclosed as illustrative forms of implementing the claims.

Claims

  1. A system comprising: one or more processors; and memory storing computer-executable instructions that, when executed by the one or more processors, cause performance of operations comprising: executing a video game that outputs video game content in a series of frames, the video game content representing a game world of the video game; identifying augmented reality (AR) content based at least in part on video game data about a current state of the video game; generating an augmented frame that includes the AR content rendered within the game world; and rendering the augmented frame for display.
  2. The system of claim 1, wherein the identifying of the AR content comprises: providing the video game data as input to one or more executable programs; and receiving, as output from the one or more executable programs, the AR content.
  3. The system of claim 1, wherein the video game data specifies game world coordinates associated with a to-be-rendered portion of the game world, and wherein the AR content is identified based at least in part on the game world coordinates.
  4. The system of claim 3, wherein the video game data further specifies a game identifier (ID) of the video game, and wherein the AR content is identified based at least in part on the game ID.
  5. The system of claim 1, wherein the video game data specifies a current orientation of a virtual camera associated with a player-controlled character, and wherein the AR content is identified based at least in part on the current orientation of the virtual camera.
  6. The system of claim 1, wherein the AR content is a screenshot of at least one of a portion of the game world or a portion of a different game world of a different video game.
  7. The system of claim 1, wherein the AR content is a three-dimensional (3D) screenshot, and wherein the generating of the augmented frame comprises: constructing a 3D model from the 3D screenshot based at least in part on depth buffer data associated with the 3D screenshot; and overlaying the 3D model on at least a portion of the video game content within the augmented frame.
  8. The system of claim 1, wherein the AR content is a plurality of sequential three-dimensional (3D) screenshots, and wherein the generating of the augmented frame comprises starting playback of a 3D video based at least in part on the plurality of sequential 3D screenshots.
  9. The system of claim 1, the operations further comprising determining the video game data by an AR component executing as a separate process from the video game.
  10. A method comprising: executing, by one or more processors, a video game; identifying, by the one or more processors, augmented reality (AR) content based at least in part on video game data about a current state of the video game; generating, by the one or more processors, an augmented frame that includes the AR content rendered within a game world of the video game; and rendering, by the one or more processors, the augmented frame for display.
  11. The method of claim 10, wherein the identifying of the AR content comprises: providing the video game data as input to an executable program; and receiving, as output from the executable program, the AR content.
  12. The method of claim 10, further comprising determining the video game data by an AR component executing as a separate process from the video game.
  13. The method of claim 10, wherein the video game data comprises spatial data associated with a to-be-rendered portion of the game world.
  14. The method of claim 13, wherein the spatial data specifies at least one of: game world coordinates associated with the to-be-rendered portion of the game world; or a current orientation of a virtual camera associated with a player-controlled character.
  15. The method of claim 10, wherein the augmented frame is rendered for display on a first client machine, wherein the AR content comprises a screenshot of a portion of the game world, or a different game world, rendered on a second client machine, and wherein the generating of the augmented frame comprises: based at least in part on the screenshot, constructing a viewport into the game world, or the different game world, rendered on the second client machine.
  16. A method comprising: receiving, by one or more processors, video game data associated with a video game; identifying, by the one or more processors, augmented reality (AR) content based at least in part on the video game data; and sending, by the one or more processors, the AR content to a client machine to generate an augmented frame for the video game that includes the AR content rendered within a game world of the video game.
  17. The method of claim 16, wherein the identifying of the AR content comprises: providing the video game data as input to one or more executable programs; and receiving, as output from the one or more executable programs, the AR content.
  18. The method of claim 16, wherein the video game data comprises spatial data associated with a to-be-rendered portion of the game world.
  19. The method of claim 16, wherein: the identifying of the AR content comprises identifying a subset of available AR content that is associated with a subscribed AR channel; and the sending of the AR content to the client machine comprises sending the subset of the AR content to the client machine.
  20. The method of claim 19, wherein: the identifying of the AR content comprises identifying an additional subset of the available AR content that is associated with an unsubscribed AR channel; and the sending of the AR content to the client machine comprises sending the additional subset of the AR content to the client machine for presentation on the client machine in a manner that visually distinguishes the additional subset of the AR content from the subset of the AR content.
