U.S. Pat. No. 11,938,398

LIGHT FIELD DISPLAY SYSTEM FOR VIDEO GAMES AND ELECTRONIC SPORTS

Assignee: Light Field Lab, Inc.

Issue Date: May 31, 2022

Abstract

A light field (LF) display system implements a holographic video game. The LF display system includes an LF display assembly that displays holographic game content. The LF display system can also include a sensory feedback assembly that provides tactile feedback to users by projecting an ultrasonic wave, a tracking system that can track one or more body parts of a user, and a controller that executes a game application and generates display instructions for the LF display assembly. The LF display system can implement an interactive video game that tracks a body part of a player and provides both visual and haptic feedback to the player to depict an in-game interaction, such as an explosion or impact. The video game may be implemented as part of an LF gaming network.

Description

DETAILED DESCRIPTION

Overview

A light field (LF) display system is implemented in a presentation space for a gaming application. The LF display system is configured to present video game data generated by the gaming application as holographic content. For example, a gaming application may be a third-person action adventure video game, a sports video game, a digital implementation of a board game, or some other gaming application. The LF display system comprises a LF display assembly configured to present holographic content including one or more holographic objects that would be visible to one or more viewers in a viewing volume of the LF display system. The holographic objects may be video game data and/or video game content and may include a virtual player, a setting, a scene, graphics information, game state information, etc. A LF display assembly may form a multi-sided seamless surface over some or all of one or more surfaces (e.g., a court) in the presentation space. The LF display system can present holographic content to viewers in the presentation space. A viewer is generally located in the presentation space, but may be any person in a location that can view the holographic content in the presentation space. Further, while generally referred to as a “viewer” herein, a viewer may also be any person interacting with (e.g., viewing, executing, inputting to, receiving output from, etc.) a gaming application that generates video game data which can be represented as holographic content in the presentation space.

A holographic object of the holographic content may also be augmented with other sensory stimuli (e.g., tactile and/or audio). For example, ultrasonic emitters in the LF display system may emit ultrasonic pressure waves that provide a tactile surface for some or all of the holographic object. Holographic content may include additional visual content (i.e., 2D or 3D visual content). In multi-emitter implementations, the system coordinates the emitters to ensure a cohesive experience (i.e., holographic objects provide the correct haptic feel and sensory stimuli at any given point in time).

In some embodiments, the LF display system includes a plurality of LF display modules that form a game display area (e.g., a game setting, a representation of a virtual world, a scene experienced by a player or an observer within a game, etc.) in the presentation space. The LF display modules forming the game display area may be configured to project holographic content of a gaming application to viewers in the viewing volume of the LF display within the presentation space. In this disclosure, it is assumed that a ‘viewer in the presentation space’ refers to a viewer in one of the viewing volumes of the LF display system within the presentation space. Viewing volumes are described in greater detail below. In this manner, a viewer in the presentation space can perceive holographic video game data generated by the gaming application in the game display area. For example, the LF display system may display a figure in a multiplayer online battle arena game, a battle royale game, a real-time strategy game, a simulation game, or any other content associated with a gaming application. In some embodiments, the LF display system may create holographic content for display to viewers in the presentation space. For example, the LF display system may create an expanded virtual setting in the presentation space that adds to the experience of the holographic video game data presented in the game display area. In an embodiment, the presentation space reproduces a portion of a sporting arena. In an embodiment, the presentation space may be located in an arena, or may be located at a bar, a home entertainment system, a table-top presentation system, or a public presentation system.

In some embodiments, the LF display system may include elements that enable the system to emit at least one type of energy, and, simultaneously, absorb at least one type of energy (for creating holographic content). For example, a LF display system can emit both holographic objects for viewing as well as ultrasonic waves for generation of a tactile surface for haptic perception. Simultaneously, the LF display system can absorb both imaging information for tracking of viewers and other scene analysis and ultrasonic waves to detect touch response by the users. As an example, such a system may project a holographic ball that is thrown into the audience in the presentation space. When the holographic ball is virtually “touched” by a viewer, the LF display system gives the viewer the illusion that the ball is in their hands, and may simultaneously project a tactile surface coincident with the surface of the ball to allow the viewer to feel the presence of the holographic ball. The display system components that perform energy sensing of the environment may be integrated into the display surface via bidirectional energy elements that both emit and absorb energy, or they may be dedicated sensors that are separate from the display surface. For example, the LF display system may include dedicated ultrasonic transducers and image capture devices separated from the display surface.

The LF display system can be part of a LF gaming application network. The LF gaming application network allows LF data to be recorded in one location (e.g., a first arena), encoded, transmitted to a different location (e.g., a second arena), decoded, and displayed as holographic content to viewers in a presentation space in the different location. This allows viewers in multiple locations to perceive a live-gaming application occurring in a different location. In some embodiments, the LF display system includes a network system that manages the digital rights of the holographic content.

Light Field Display System

FIG. 1 is a diagram 100 of a light field (LF) display module 110 presenting a holographic object 120, in accordance with one or more embodiments. The LF display module 110 is part of a light field (LF) display system. The LF display system presents holographic content including at least one holographic object using one or more LF display modules. The LF display system can present holographic content to one or multiple viewers. In some embodiments, the LF display system may also augment the holographic content with other sensory content (e.g., touch, audio, smell, temperature, etc.). For example, as discussed below, the projection of focused ultrasonic sound waves may generate a mid-air tactile sensation that can simulate a surface of some or all of a holographic object. The LF display system includes one or more LF display modules 110, and is discussed in detail below with regard to FIGS. 2-4.

The LF display module 110 is a holographic display that presents holographic objects (e.g., the holographic object 120) to one or more viewers (e.g., viewer 140). The LF display module 110 includes an energy device layer (e.g., an emissive electronic display or acoustic projection device) and an energy waveguide layer (e.g., an optical lens array). Additionally, the LF display module 110 may include an energy relay layer for combining multiple energy sources or detectors together to form a single surface. At a high level, the energy device layer generates energy (e.g., holographic content) that is then directed using the energy waveguide layer to a region in space in accordance with one or more four-dimensional (4D) light field functions. The LF display module 110 may also project and/or sense one or more types of energy simultaneously. For example, the LF display module 110 may be able to project a holographic image as well as an ultrasonic tactile surface in a viewing volume, while simultaneously detecting imaging data from the viewing volume. The operation of the LF display module 110 is discussed in more detail below with regard to FIGS. 2-3.

The LF display module 110 generates holographic objects within a holographic object volume 160 using one or more 4D light field functions (e.g., derived from a plenoptic function). The holographic objects can be three-dimensional (3D), two-dimensional (2D), or some combination thereof. Moreover, the holographic objects may be polychromatic (e.g., full color). The holographic objects may be projected in front of the screen plane, behind the screen plane, or split by the screen plane. A holographic object 120 can be presented such that it is perceived anywhere within the holographic object volume 160. A holographic object within the holographic object volume 160 may appear to a viewer 140 to be floating in space.
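The patent does not define a concrete 4D light field function, but the concept can be sketched numerically: a function of two spatial coordinates on the display surface and two angular coordinates returns a radiance, so the same surface coordinate emits different values in different directions. In the sketch below, the toy radiance model, virtual object position, and depth are invented purely for illustration and are not from the patent.

```python
import math

def light_field_sample(x, y, theta_x, theta_y):
    """Illustrative 4D light field function: maps a spatial coordinate
    (x, y) on the display surface and two angular coordinates
    (theta_x, theta_y) to a radiance value. A real LF display evaluates
    such a function for every projection location and direction."""
    depth = 0.5            # assumed: virtual object 0.5 m behind the screen plane
    obj_x, obj_y = 0.1, 0.0  # assumed virtual object position
    # The ray leaving (x, y) in direction (theta_x, theta_y) crosses the
    # virtual object's depth plane at:
    hit_x = x + depth * math.tan(theta_x)
    hit_y = y + depth * math.tan(theta_y)
    # Toy model: radiance falls off with distance from the object centre.
    r2 = (hit_x - obj_x) ** 2 + (hit_y - obj_y) ** 2
    return math.exp(-r2 / 0.01)

# Two viewing directions from the same display coordinate see different
# radiance values: the defining property of a 4D (rather than 2D) display.
head_on = light_field_sample(0.1, 0.0, 0.0, 0.0)
oblique = light_field_sample(0.1, 0.0, 0.3, 0.0)
```

Because the function depends on direction as well as position, each eye of a viewer, and each viewer location, samples a different image, which is what produces parallax without eyewear.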

A holographic object volume 160 represents a volume in which holographic objects may be perceived by a viewer 140. The holographic object volume 160 can extend in front of the surface of the display area 150 (i.e., towards the viewer 140) such that holographic objects can be presented in front of the plane of the display area 150. Additionally, the holographic object volume 160 can extend behind the surface of the display area 150 (i.e., away from the viewer 140), allowing for holographic objects to be presented as if they are behind the plane of the display area 150. In other words, the holographic object volume 160 may include all the rays of light that originate (e.g., are projected) from a display area 150 and can converge to create a holographic object. Herein, light rays may converge at a point that is in front of the display surface, at the display surface, or behind the display surface. More simply, the holographic object volume 160 encompasses all of the volume from which a holographic object may be perceived by a viewer.

A viewing volume 130 is a volume of space from which holographic objects (e.g., holographic object 120) presented within a holographic object volume 160 by the LF display system are fully viewable. The holographic objects may be presented within the holographic object volume 160, and viewed within a viewing volume 130, such that they are indistinguishable from actual objects. A holographic object is formed by projecting the same light rays that would be generated from the surface of the object were it physically present.

In some cases, the holographic object volume 160 and the corresponding viewing volume 130 may be relatively small, such that they are designed for a single viewer, as described in detail below with regard to FIGS. 10 and 11. In other embodiments, the LF display modules may be enlarged and/or tiled to create larger holographic object volumes and corresponding viewing volumes that can accommodate a large range of viewers (e.g., 1 to thousands), as described in detail below with regard to, e.g., FIGS. 4A-4F, 6A-6B, 7A-7B, 8, and 9A-9B. The LF display modules presented in this disclosure may be built so that the full surface of the LF display contains holographic imaging optics, with no inactive or dead space, and without any need for bezels. In these embodiments, the LF display modules may be tiled so that the imaging area is continuous across the seam between LF display modules, and the connection points between the tiled modules are virtually undetectable at the visual acuity of the eye. Notably, in some configurations, some portion of the display surface may not include holographic imaging optics, although such configurations are not described in detail herein.

The flexible size and/or shape of a viewing volume 130 allows viewers to be unconstrained within the viewing volume 130. For example, a viewer 140 can move to a different position within a viewing volume 130 and see a different view of the holographic object 120 from the corresponding perspective. To illustrate, referring to FIG. 1, the viewer 140 is at a first position relative to the holographic object 120 such that the holographic object 120 appears to be a head-on view of a dolphin. The viewer 140 may move to other locations relative to the holographic object 120 to see different views of the dolphin. For example, the viewer 140 may move such that he/she sees a left side of the dolphin, a right side of the dolphin, etc., much like if the viewer 140 was looking at an actual dolphin and changed his/her relative position to the actual dolphin to see different views of the dolphin. In some embodiments, the holographic object 120 is visible to all viewers within the viewing volume 130 that have an unobstructed line of sight (i.e., not blocked by an object/person) to the holographic object 120. These viewers may be unconstrained such that they can move around within the viewing volume to see different perspectives of the holographic object 120. Accordingly, the LF display system may present holographic objects such that a plurality of unconstrained viewers may simultaneously see different perspectives of the holographic objects in real-world space as if the holographic objects were physically present.

In contrast, conventional displays (e.g., stereoscopic, virtual reality, augmented reality, or mixed reality) generally require each viewer to wear some sort of external device (e.g., 3-D glasses, a near-eye display, or a head-mounted display) to see content. Additionally and/or alternatively, conventional displays may require that a viewer be constrained to a particular viewing position (e.g., in a chair that has a fixed location relative to the display). For example, when viewing an object shown by a stereoscopic display, a viewer always focuses on the display surface, rather than on the object, and the display will always present just two views of an object, which will follow a viewer who attempts to move around that perceived object, causing distortions in the perception of that object. With a light field display, however, viewers of a holographic object presented by the LF display system do not need to wear an external device, nor be confined to a particular position, in order to see the holographic object. The LF display system presents the holographic object in a manner that is visible to viewers in much the same way a physical object would be visible to them, with no requirement of special eyewear, glasses, or a head-mounted accessory. Further, a viewer may view holographic content from any location within a viewing volume.

Notably, potential locations for holographic objects within the holographic object volume 160 are limited by the size of the volume. To increase the size of the holographic object volume 160, a size of a display area 150 of the LF display module 110 may be increased, and/or multiple LF display modules may be tiled together in a manner that forms a seamless display surface. The seamless display surface has an effective display area that is larger than the display areas of the individual LF display modules. Some embodiments relating to tiling LF display modules are discussed below with regard to FIGS. 4A-4F, 6A-6B, 7A-7B, 8, and 9A-9B. As illustrated in FIG. 1, the display area 150 is, for example, rectangular, resulting in a holographic object volume 160 that is a pyramid. In other embodiments, the display area may have some other shape (e.g., hexagonal), which also affects the shape of the corresponding viewing volume.

Additionally, while the above discussion focuses on presenting the holographic object 120 within a portion of the holographic object volume 160 that is between the LF display module 110 and the viewer 140, the LF display module 110 can additionally present content in the holographic object volume 160 behind the plane of the display area 150. For example, the LF display module 110 may make the display area 150 appear to be a surface of the ocean that the holographic object 120 is jumping out of. And the displayed content may be such that the viewer 140 is able to look through the displayed surface to see marine life that is under the water. Moreover, the LF display system can generate content that seamlessly moves around the holographic object volume 160, including behind and in front of the plane of the display area 150.

FIG. 2A illustrates a cross section 200 of a portion of a LF display module 210, in accordance with one or more embodiments. The LF display module 210 may be the LF display module 110. In other embodiments, the LF display module 210 may be another LF display module with a different display area shape than the display area 150. In the illustrated embodiment, the LF display module 210 includes an energy device layer 220, an energy relay layer 230, and an energy waveguide layer 240. Some embodiments of the LF display module 210 have different components than those described here. For example, in some embodiments, the LF display module 210 does not include the energy relay layer 230. Similarly, the functions can be distributed among the components in a different manner than is described here.

The display system described here presents an emission of energy that replicates the energy normally surrounding an object in the real world. Here, emitted energy is directed towards a specific direction from every coordinate on the display surface. In other words, the various coordinates on the display surface act as projection locations for emitted energy. The directed energy from the display surface enables the convergence of many rays of energy, which can thereby create holographic objects. For visible light, for example, the LF display will project a very large number of light rays from the projection locations; these rays may converge at any point in the holographic object volume so that, from the perspective of a viewer located farther away than the projected object, they appear to come from the surface of a real-world object occupying that region of space. In this way, the LF display generates the rays of reflected light that would leave such an object's surface from the perspective of the viewer. As the viewer's perspective on any given holographic object changes, the viewer will see a different view of that holographic object.

The energy device layer 220 includes one or more electronic displays (e.g., an emissive display such as an OLED) and one or more other energy projection and/or energy receiving devices as described herein. The one or more electronic displays are configured to display content in accordance with display instructions (e.g., from a controller of a LF display system). The one or more electronic displays include a plurality of pixels, each with an intensity that is individually controlled. Many types of commercial displays, such as emissive LED and OLED displays, may be used in the LF display.

The energy device layer 220 may also include one or more acoustic projection devices and/or one or more acoustic receiving devices. An acoustic projection device generates one or more pressure waves that complement the holographic object 250. The generated pressure waves may be, e.g., audible, ultrasonic, or some combination thereof. An array of ultrasonic pressure waves may be used for volumetric tactile sensation (e.g., at a surface of the holographic object 250). An audible pressure wave is used for providing audio content (e.g., immersive audio) that can complement the holographic object 250. For example, assuming the holographic object 250 is a dolphin, one or more acoustic projection devices may be used to (1) generate a tactile surface that is collocated with a surface of the dolphin such that viewers may touch the holographic object 250; and (2) provide audio content corresponding to noises a dolphin makes, such as clicks, chirping, or chatter. An acoustic receiving device (e.g., a microphone or microphone array) may be configured to monitor ultrasonic and/or audible pressure waves within a local area of the LF display module 210.
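Mid-air tactile surfaces of the kind described above are commonly produced by focusing an ultrasonic phased array: each emitter is triggered with a delay chosen so that all wavefronts arrive at the focal point in phase, creating a localized pressure maximum. The patent does not specify an implementation; the sketch below shows only the textbook delay calculation, with the emitter layout and focal point chosen arbitrarily.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def focusing_delays(emitters, focus):
    """Per-emitter trigger delays (seconds) that make ultrasonic waves
    from all emitters arrive at `focus` simultaneously, producing a
    pressure maximum there (a point of mid-air tactile sensation).
    `emitters` and `focus` are (x, y, z) tuples in metres."""
    dists = [math.dist(e, focus) for e in emitters]
    farthest = max(dists)
    # Emitters closer to the focus wait longer, so all wavefronts align.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A three-element line of emitters on the display surface (z = 0),
# focusing 0.2 m in front of the centre emitter.
emitters = [(-0.05, 0.0, 0.0), (0.0, 0.0, 0.0), (0.05, 0.0, 0.0)]
delays = focusing_delays(emitters, (0.0, 0.0, 0.2))
```

Sweeping the focal point across the surface of a holographic object, frame by frame, is one way such a system could make the object's surface feel present to a viewer's hand.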

The energy device layer 220 may also include one or more imaging sensors. An imaging sensor may be sensitive to light in a visible optical band, and in some cases may be sensitive to light in other bands (e.g., infrared). The imaging sensor may be, e.g., a complementary metal oxide semiconductor (CMOS) array, a charge-coupled device (CCD), an array of photodetectors, some other sensor that captures light, or some combination thereof. The LF display system may use data captured by the one or more imaging sensors for position tracking of viewers.

In some configurations, the energy relay layer 230 relays energy (e.g., electromagnetic energy, mechanical pressure waves, etc.) between the energy device layer 220 and the energy waveguide layer 240. The energy relay layer 230 includes one or more energy relay elements 260. Each energy relay element includes a first surface 265 and a second surface 270, and it relays energy between the two surfaces. The first surface 265 of each energy relay element may be coupled to one or more energy devices (e.g., an electronic display or acoustic projection device). An energy relay element may be composed of, e.g., glass, carbon, optical fiber, optical film, plastic, polymer, or some combination thereof. Additionally, in some embodiments, an energy relay element may adjust the magnification (increase or decrease) of energy passing between the first surface 265 and the second surface 270. If the relay offers magnification, then the relay may take the form of an array of bonded tapered relays, called tapers, where the area of one end of the taper may be substantially larger than that of the opposite end. The large ends of the tapers can be bonded together to form a seamless energy surface 275. One advantage is that space is created on the multiple small ends of each taper to accommodate the mechanical envelope of multiple energy sources, such as the bezels of multiple displays. This extra room allows the energy sources to be placed side by side on the small taper side, with each energy source directing energy from its active area into the small taper surface, to be relayed to the large seamless energy surface. Another advantage of using tapered relays is that there is no non-imaging dead space on the combined seamless energy surface formed by the large ends of the tapers. No border or bezel exists, and so the seamless energy surfaces can be tiled together to form a larger surface with virtually no seams according to the visual acuity of the eye.

The second surfaces of adjacent energy relay elements come together to form an energy surface 275. In some embodiments, a separation between the edges of adjacent energy relay elements is less than a minimum perceptible contour as defined by the visual acuity of a human eye having, for example, 20/40 vision, such that the energy surface 275 is effectively seamless from the perspective of a viewer 280 within a viewing volume 285.
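The "minimum perceptible contour" criterion can be estimated from visual acuity: an eye with 20/40 vision resolves roughly two arcminutes, so a seam is effectively invisible when it subtends less than that angle at the viewing distance. This is a back-of-the-envelope sketch using that standard approximation, not a figure taken from the patent.

```python
import math

def max_imperceptible_gap(viewing_distance_m, acuity_arcmin=2.0):
    """Largest seam width (metres) that stays below the minimum
    perceptible contour for an eye resolving `acuity_arcmin` arcminutes
    (roughly 2 arcmin for 20/40 vision) at the given viewing distance."""
    theta = math.radians(acuity_arcmin / 60.0)  # acuity angle in radians
    return viewing_distance_m * math.tan(theta)

# At a 2 m viewing distance, a seam narrower than about 1.2 mm
# subtends less than 2 arcminutes and is effectively invisible.
gap = max_imperceptible_gap(2.0)
```

The same calculation explains why tiled modules intended for close single-viewer use need tighter seams than large venue installations viewed from many metres away.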

In some embodiments, the second surfaces of adjacent energy relay elements are fused together with processing steps that may include one or more of pressure, heat, and a chemical reaction, in such a way that no seam exists between them. In still other embodiments, an array of energy relay elements is formed by molding one side of a continuous block of relay material into an array of small taper ends, each configured to transport energy from an energy device attached to the small tapered end into a single combined surface with a larger area which is never subdivided.

In some embodiments, one or more of the energy relay elements exhibit energy localization, where the energy transport efficiency in the longitudinal direction substantially normal to the surfaces 265 and 270 is much higher than the transport efficiency in the perpendicular transverse plane, and where the energy density is highly localized in this transverse plane as the energy wave propagates between the surface 265 and the surface 270. This localization of energy allows an energy distribution, such as an image, to be efficiently relayed between these surfaces without any significant loss in resolution.

The energy waveguide layer 240 directs energy from a location (e.g., a coordinate) on the energy surface 275 into a specific energy propagation path outward from the display surface into the holographic viewing volume 285 using waveguide elements in the energy waveguide layer 240. The energy propagation path is defined by two angular dimensions determined at least by the energy surface coordinate location relative to the waveguide. The waveguide is associated with a spatial 2D coordinate. Together, these four coordinates form a four-dimensional (4D) energy field. As an example, for electromagnetic energy, the waveguide elements in the energy waveguide layer 240 direct light from positions on the seamless energy surface 275 along different propagation directions through the viewing volume 285. In various examples, the light is directed in accordance with a 4D light field function to form the holographic object 250 within the holographic object volume 255.
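As a rough illustration of how the two angular dimensions arise, consider an idealized thin-lens lenslet with the energy surface at its focal plane: the offset (u, v) of an emitting coordinate from the lenslet's optical axis selects the outgoing ray direction. The thin-lens model, focal length, and offsets below are assumed example values, not parameters from the patent.

```python
import math

def propagation_angles(u, v, focal_length):
    """Angles (radians) of the ray leaving a waveguide element when the
    energy originates at offset (u, v) metres from the element's optical
    axis, for an idealized thin-lens lenslet whose focal plane coincides
    with the energy surface. The sign flip reflects the inversion a
    positive lens applies to off-axis sources."""
    return math.atan2(-u, focal_length), math.atan2(-v, focal_length)

# A source 0.5 mm to the left of the lenslet axis, with an assumed
# lenslet focal length of 2 mm, produces a ray steered to the right.
ax, ay = propagation_angles(-0.0005, 0.0, 0.002)
```

Together with the lenslet's own 2D position on the display surface, these two angles give the four coordinates of the 4D energy field described above.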

Each waveguide element in the energy waveguide layer 240 may be, for example, a lenslet composed of one or more elements. In some configurations, the lenslet may be a positive lens. The positive lens may have a surface profile that is spherical, aspherical, or freeform. Additionally, in some embodiments, some or all of the waveguide elements may include one or more additional optical components. An additional optical component may be, e.g., an energy-inhibiting structure such as a baffle, a positive lens, a negative lens, a spherical lens, an aspherical lens, a freeform lens, a liquid crystal lens, a liquid lens, a refractive element, a diffractive element, or some combination thereof. In some embodiments, the lenslet and/or at least one of the additional optical components is able to dynamically adjust its optical power. For example, the lenslet may be a liquid crystal lens or a liquid lens. Dynamic adjustment of a surface profile of the lenslet and/or at least one additional optical component may provide additional directional control of light projected from a waveguide element.

In the illustrated example, the holographic object volume 255 of the LF display has boundaries formed by light ray 256 and light ray 257, but could be formed by other rays. The holographic object volume 255 is a continuous volume that extends both in front of the energy waveguide layer 240 (i.e., towards the viewer 280) and behind it (i.e., away from the viewer 280). In the illustrated example, ray 256 and ray 257 are projected from opposite edges of the LF display module 210 at the highest angle relative to the normal to the display surface 277 that may be perceived by a user, but these could be other projected rays. These rays define the field of view of the display and, thus, the boundaries of the holographic viewing volume 285. In some cases, the rays define a holographic viewing volume where the full display can be observed without vignetting (e.g., an ideal viewing volume). As the field of view of the display increases, the convergence point of ray 256 and ray 257 moves closer to the display. Thus, a display having a larger field of view allows a viewer 280 to see the full display at a closer viewing distance. Additionally, rays 256 and 257 may form an ideal holographic object volume. Holographic objects presented in an ideal holographic object volume can be seen anywhere in the viewing volume 285.
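The relationship between the field of view and the nearest full-view distance can be made concrete: rays leaving opposite display edges at the extreme half-angle converge at a distance given by simple trigonometry, and a viewer beyond that convergence point can see the whole display. A minimal sketch, with the display width and field-of-view values chosen purely for illustration:

```python
import math

def min_full_view_distance(display_width_m, fov_degrees):
    """Distance (metres) at which rays projected from opposite edges of
    a display of the given width, at the extreme half-angle of its field
    of view, converge. A viewer beyond this distance can see the entire
    display surface (the near boundary of the ideal viewing volume)."""
    half_angle = math.radians(fov_degrees / 2.0)
    return (display_width_m / 2.0) / math.tan(half_angle)

# Widening the field of view moves the convergence point toward the
# display, so the viewer can stand closer and still see the full surface.
near = min_full_view_distance(1.0, 90.0)   # +/- 45 degree ray cone
wide = min_full_view_distance(1.0, 120.0)  # +/- 60 degree ray cone
```

This is the geometric reason a larger field of view, as the paragraph above notes, allows a closer viewing distance.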

In some examples, holographic objects may be presented to only a portion of the viewing volume 285. In other words, the viewing volume may be divided into any number of viewing sub-volumes (e.g., viewing sub-volume 290). Additionally, holographic objects can be projected outside of the holographic object volume 255. For example, holographic object 251 is presented outside of holographic object volume 255. Because the holographic object 251 is presented outside of the holographic object volume 255, it cannot be viewed from every location in the viewing volume 285. For example, holographic object 251 may be visible from a location in viewing sub-volume 290, but not visible from the location of the viewer 280.

For example, we turn to FIG. 2B to illustrate viewing holographic content from different viewing sub-volumes. FIG. 2B illustrates a cross section 200 of a portion of a LF display module, in accordance with one or more embodiments. The cross section of FIG. 2B is the same as the cross section of FIG. 2A. However, FIG. 2B illustrates a different set of light rays projected from the LF display module 210. Ray 256 and ray 257 still form a holographic object volume 255 and a viewing volume 285. However, as shown, rays projected from the top of the LF display module 210 and the bottom of the LF display module 210 overlap to form various viewing sub-volumes (e.g., viewing sub-volumes 290A, 290B, 290C, and 290D) within the viewing volume 285. A viewer in the first viewing sub-volume (e.g., 290A) may be able to perceive holographic content presented in the holographic object volume 255 that viewers in the other viewing sub-volumes (e.g., 290B, 290C, and 290D) are unable to perceive.

More simply, as illustrated in FIG. 2A, the holographic object volume 255 is a volume in which holographic objects may be presented by the LF display system such that they may be perceived by viewers (e.g., viewer 280) in the viewing volume 285. In this way, the viewing volume 285 is an example of an ideal viewing volume, while the holographic object volume 255 is an example of an ideal object volume. However, in various configurations, viewers may perceive holographic objects presented by the LF display system 200 in other example holographic object volumes. More generally, an “eye-line guideline” applies when viewing holographic content projected from an LF display module. The eye-line guideline asserts that the line formed by a viewer's eye position and a holographic object being viewed must intersect a LF display surface.
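The eye-line guideline reduces to a geometric test: does the sight line from the eye through the holographic object strike the display surface? A minimal sketch, modeling the display as a rectangle in the z = 0 plane with the viewer at positive z; the coordinate convention and dimensions are assumptions for illustration, not part of the patent.

```python
def eye_line_intersects_display(eye, obj, half_width, half_height):
    """True if the line of sight from `eye` through holographic object
    `obj` meets the display surface, modelled as the rectangle
    |x| <= half_width, |y| <= half_height in the z = 0 plane. Objects
    may sit in front of or behind that plane; points are (x, y, z)."""
    ez, oz = eye[2], obj[2]
    if ez == oz:            # sight line parallel to the display plane
        return False
    t = ez / (ez - oz)      # parameter where the line crosses z = 0
    if t < 0.0:             # the display lies behind the viewer's head
        return False
    x = eye[0] + t * (obj[0] - eye[0])
    y = eye[1] + t * (obj[1] - eye[1])
    return abs(x) <= half_width and abs(y) <= half_height

# An object projected in front of the screen plane satisfies the
# guideline: the sight line, extended through it, strikes the display.
front = eye_line_intersects_display((0, 0, 1.0), (0.0, 0, 0.4), 0.5, 0.3)
# An object far off to the side does not: the sight line misses the
# display rectangle, so no projected rays can reach the eye from it.
miss = eye_line_intersects_display((0, 0, 1.0), (2.0, 0, 0.5), 0.5, 0.3)
```

A test of this form could also explain why holographic object 251, presented outside the ideal object volume, is visible from some viewing sub-volumes but not others.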

When viewing holographic content presented by the LF display module 210, each eye of the viewer 280 sees a different perspective of the holographic object 250 because the holographic content is presented according to a 4D light field function. Moreover, as the viewer 280 moves within the viewing volume 285, he/she would also see different perspectives of the holographic object 250, as would other viewers within the viewing volume 285. As will be appreciated by one of ordinary skill in the art, a 4D light field function is well known in the art and will not be elaborated further herein.
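Although the 4D light field function is not elaborated herein, a minimal sketch can make the idea concrete. The following illustrative Python model (not the patent's implementation; the names `sample_light_field`, `ray_direction`, and `point_source` are hypothetical) treats the light field as a function L(x, y, u, v): the radiance of the ray leaving surface position (x, y) in angular direction (u, v). A viewer's two eyes sample different (u, v) directions from the same surface, which is why each eye sees a different perspective.

```python
import math

def ray_direction(u, v):
    """Convert angular coordinates (u, v), in radians, to a unit direction
    vector for a ray leaving the display surface (z points out of the display)."""
    dx, dy = math.tan(u), math.tan(v)
    norm = math.sqrt(dx * dx + dy * dy + 1.0)
    return (dx / norm, dy / norm, 1.0 / norm)

def sample_light_field(light_field, x, y, u, v):
    """A 4D light field function: radiance for the ray leaving surface
    position (x, y) in direction (u, v). 'light_field' is any callable
    with that signature (here, a stand-in for real display data)."""
    return light_field(x, y, u, v)

# Toy light field: a 'holographic point' at (0, 0, 1) in front of the
# display is bright only along rays that pass through that point.
def point_source(x, y, u, v, eps=1e-6):
    dx, dy, dz = ray_direction(u, v)
    t = 1.0 / dz                       # ray parameter where z reaches 1
    hit_x, hit_y = x + t * dx, y + t * dy
    return 1.0 if abs(hit_x) < eps and abs(hit_y) < eps else 0.0
```

Rays from different surface positions aimed through the same point all return the bright value, so a viewer perceives a point floating in front of the surface regardless of where on the surface they look.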

As described in more detail herein, in some embodiments, the LF display can project more than one type of energy. For example, the LF display may project two types of energy, such as, for example, mechanical energy and electromagnetic energy. In this configuration, energy relay layer 230 may include two separate energy relays which are interleaved together at the energy surface 275, but are separated such that the energy is relayed to two different energy device layers 220. Here, one relay may be configured to transport electromagnetic energy, while another relay may be configured to transport mechanical energy. In some embodiments, the mechanical energy may be projected from locations between the electromagnetic waveguide elements on the energy waveguide layer 240, helping form structures that inhibit light from being transported from one electromagnetic waveguide element to another. In some embodiments, the energy waveguide layer 240 may also include waveguide elements that transport focused ultrasound along specific propagation paths in accordance with display instructions from a controller.

Note that in alternate embodiments (not shown), the LF display module 210 does not include the energy relay layer 230. In this case, the energy surface 275 is an emission surface formed using one or more adjacent electronic displays within the energy device layer 220. And in some embodiments, with no energy relay layer, a separation between edges of adjacent electronic displays is less than a minimum perceptible contour as defined by the visual acuity of a human eye having 20/40 vision, such that the energy surface is effectively seamless from the perspective of the viewer 280 within the viewing volume 285.
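The 20/40 criterion can be turned into a number with a back-of-envelope sketch, assuming the common rule of thumb that 20/20 Snellen acuity resolves about one arcminute (so 20/40 resolves about two arcminutes). The helper below is illustrative only:

```python
import math

def max_seam_gap(viewing_distance_m, acuity_denominator=40):
    """Largest gap between adjacent electronic displays that stays below
    the minimum perceptible contour for a viewer with 20/<denominator>
    Snellen acuity, assuming 20/20 vision resolves ~1 arcminute."""
    resolvable_arcmin = acuity_denominator / 20.0   # 20/40 -> 2 arcmin
    theta_rad = math.radians(resolvable_arcmin / 60.0)
    return viewing_distance_m * math.tan(theta_rad)
```

Under these assumptions, a viewer 2 m from the surface tolerates a seam of roughly 1.2 mm before it becomes perceptible; closer viewing distances demand proportionally tighter seams.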

LF Display Modules

FIG. 3A is a perspective view of a LF display module 300A, in accordance with one or more embodiments. The LF display module 300A may be the LF display module 110 and/or the LF display module 210. In other embodiments, the LF display module 300A may be some other LF display module. In the illustrated embodiment, the LF display module 300A includes an energy device layer 310, an energy relay layer 320, and an energy waveguide layer 330. The LF display module 300A is configured to present holographic content from a display surface 365 as described herein. For convenience, the display surface 365 is illustrated as a dashed outline on the frame 390 of the LF display module 300A, but is, more accurately, the surface directly in front of the waveguide elements bounded by the inner rim of the frame 390. The display surface 365 includes a plurality of projection locations from which energy can be projected. Some embodiments of the LF display module 300A have different components than those described here. For example, in some embodiments, the LF display module 300A does not include the energy relay layer 320. Similarly, the functions can be distributed among the components in a different manner than is described here.

The energy device layer 310 is an embodiment of the energy device layer 220. The energy device layer 310 includes four energy devices 340 (three are visible in the figure). The energy devices 340 may all be the same type (e.g., all electronic displays), or may include one or more different types (e.g., electronic displays and at least one acoustic energy device).

The energy relay layer 320 is an embodiment of the energy relay layer 230. The energy relay layer 320 includes four energy relay devices 350 (three are visible in the figure). The energy relay devices 350 may all relay the same type of energy (e.g., light), or may relay one or more different types (e.g., light and sound). Each of the relay devices 350 includes a first surface and a second surface, the second surfaces of the energy relay devices 350 being arranged to form a singular seamless energy surface 360. In the illustrated embodiment, each of the energy relay devices 350 is tapered such that the first surface has a smaller surface area than the second surface, which allows accommodation for the mechanical envelopes of the energy devices 340 on the small end of the tapers. This also allows the seamless energy surface to be borderless, since the entire area can project energy. This means that this seamless energy surface can be tiled by placing multiple instances of the LF display module 300A together, without dead space or bezels, so that the entire combined surface is seamless. In other embodiments, the first surface and the second surface have the same surface area.
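The geometric trade-off of such a taper can be sketched numerically. In this illustrative calculation (the dimensions are assumed, not taken from the patent), a relay whose first surface is half the width and height of its second surface magnifies linearly by 2x and spreads the relayed energy over 4x the area:

```python
def taper_magnification(first_surface_mm, second_surface_mm):
    """Linear magnification of a tapered energy relay, and the factor by
    which energy per unit area is diluted from the small (device) end to
    the large (seamless-surface) end. Each surface is (width, height) in mm."""
    w1, h1 = first_surface_mm
    w2, h2 = second_surface_mm
    linear_mag = w2 / w1             # assumes the taper preserves aspect ratio
    area_ratio = (w2 * h2) / (w1 * h1)
    return linear_mag, area_ratio

# Example: a 10 mm x 10 mm device end expanded to a 20 mm x 20 mm surface end.
lin, area = taper_magnification((10.0, 10.0), (20.0, 20.0))
```

The area ratio is the price of the bezel-free tiling: the same device output is spread over a larger emitting area, which is one reason other embodiments keep the two surfaces the same size.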

The energy waveguide layer 330 is an embodiment of the energy waveguide layer 240. The energy waveguide layer 330 includes a plurality of waveguide elements 370. As discussed above with respect to FIG. 2, the energy waveguide layer 330 is configured to direct energy from the seamless energy surface 360 along specific propagation paths in accordance with a 4D light field function to form a holographic object. Note that in the illustrated embodiment the energy waveguide layer 330 is bounded by a frame 390. In other embodiments, there is no frame 390 and/or the thickness of the frame 390 is reduced. Removal or reduction of the thickness of the frame 390 can facilitate tiling the LF display module 300A with additional LF display modules.

Note that in the illustrated embodiment, the seamless energy surface 360 and the energy waveguide layer 330 are planar. In alternate embodiments, not shown, the seamless energy surface 360 and the energy waveguide layer 330 may be curved in one or more dimensions.

The LF display module 300A can be configured with additional energy sources that reside on the seamless energy surface and allow the projection of an energy field in addition to the light field. In one embodiment, an acoustic energy field may be projected from electrostatic speakers (not illustrated) mounted at any number of locations on the seamless energy surface 360. Further, the electrostatic speakers of the LF display module 300A are positioned within the light field display module 300A such that the dual-energy surface simultaneously projects sound fields and holographic content. For example, the electrostatic speakers may be formed with one or more diaphragm elements that are transmissive to some wavelengths of electromagnetic energy, and driven with one or more conductive elements (e.g., planes which sandwich the one or more diaphragm elements). The electrostatic speakers may be mounted onto the seamless energy surface 360, so that the diaphragm elements cover some of the waveguide elements. The conductive electrodes of the speakers may be co-located with structures designed to inhibit light transmission between electromagnetic waveguides, and/or located at positions between electromagnetic waveguide elements (e.g., at the frame 390). In various configurations, the speakers can project an audible sound and/or many sources of focused ultrasonic energy that produce a haptic surface.

In some configurations an energy device 340 may sense energy. For example, an energy device may be a microphone, a light sensor, an acoustic transducer, etc. As such, the energy relay devices may also relay energy from the seamless energy surface 360 to the energy device layer 310. That is, the seamless energy surface 360 of the LF display module forms a bidirectional energy surface when the energy devices 340 and the energy relay devices are configured to simultaneously emit and sense energy (e.g., emit light fields and sense sound).

More broadly, an energy device 340 of a LF display module can be either an energy source or an energy sensor. The LF display module 300A can include various types of energy devices that act as energy sources and/or energy sensors to facilitate the projection of high quality holographic content to a user. Other sources and/or sensors may include thermal sensors or sources, infrared sensors or sources, image sensors or sources, mechanical energy transducers that generate acoustic energy, feedback sources, etc. Many other sensors or sources are possible. Further, the LF display modules can be tiled such that they form an assembly that projects and senses multiple types of energy from a large aggregate seamless energy surface.

In various embodiments of the LF display module 300A, the seamless energy surface 360 can have various surface portions where each surface portion is configured to project and/or emit specific types of energy. For example, when the seamless energy surface is a dual-energy surface, the seamless energy surface 360 includes one or more surface portions that project electromagnetic energy, and one or more other surface portions that project ultrasonic energy. The surface portions that project ultrasonic energy may be located on the seamless energy surface 360 between electromagnetic waveguide elements, and/or co-located with structures designed to inhibit light transmission between electromagnetic waveguide elements. In an example where the seamless energy surface is a bidirectional energy surface, the energy relay layer 320 may include two types of energy relay devices interleaved at the seamless energy surface 360. In various embodiments, the seamless energy surface 360 may be configured such that the portions of the surface under any particular waveguide element 370 are all energy sources, all energy sensors, or a mix of energy sources and energy sensors.

FIG. 3B is a cross-sectional view of a LF display module 300B which includes interleaved energy relay devices, in accordance with one or more embodiments. Energy relay device 350A transports energy between the energy relay first surface 345A connected to energy device 340A, and the seamless energy surface 360. Energy relay 350B transports energy between the energy relay first surface 345B connected to energy device 340B, and the seamless energy surface 360. Both relay devices are interleaved at the interleaved energy relay device 352, which is connected to the seamless energy surface 360. In this configuration, surface 360 contains interleaved energy locations of both energy devices 340A and 340B, which may be energy sources or energy sensors. Accordingly, the LF display module 300B may be configured as either a dual energy projection device for projecting more than one type of energy, or as a bidirectional energy device for simultaneously projecting one type of energy and sensing another type of energy. The LF display module 300B may be the LF display module 110 and/or the LF display module 210. In other embodiments, the LF display module 300B may be some other LF display module.

The LF display module 300B includes many components configured similarly to those of the LF display module 300A in FIG. 3A. For example, in the illustrated embodiment, the LF display module 300B includes an energy device layer 310, an energy relay layer 320, a seamless energy surface 360, and an energy waveguide layer 330 including at least the same functionality of those described in regard to FIG. 3A. Additionally, the LF display module 300B may present and/or receive energy from the display surface 365. Notably, the components of the LF display module 300B are connected and/or oriented differently than those of the LF display module 300A in FIG. 3A. Some embodiments of the LF display module 300B have different components than those described here. Similarly, the functions can be distributed among the components in a different manner than is described here. FIG. 3B illustrates the design of a single LF display module 300B that may be tiled to produce a dual energy projection surface or a bidirectional energy surface with a larger area.

In an embodiment, the LF display module 300B is a LF display module of a bidirectional LF display system. A bidirectional LF display system may simultaneously project energy and sense energy from the display surface 365. The seamless energy surface 360 contains both energy projecting and energy sensing locations that are closely interleaved on the seamless energy surface 360. Therefore, in the example of FIG. 3B, the energy relay layer 320 is configured in a different manner than the energy relay layer of FIG. 3A. For convenience, the energy relay layer of the LF display module 300B will be referred to herein as the “interleaved energy relay layer.”

The interleaved energy relay layer 320 includes two legs: a first energy relay device 350A and a second energy relay device 350B. Each of the legs is illustrated as a lightly shaded area in FIG. 3B. Each of the legs may be made of a flexible relay material, and formed with a sufficient length to use with energy devices of various sizes and shapes. In some regions of the interleaved energy relay layer, the two legs are tightly interleaved together as they approach the seamless energy surface 360. In the illustrated example, the interleaved energy relay devices 352 are illustrated as a darkly shaded area.

While interleaved at the seamless energy surface 360, the energy relay devices are configured to relay energy to/from different energy devices. The energy devices are at the energy device layer 310. As illustrated, energy device 340A is connected to energy relay device 350A and energy device 340B is connected to energy relay device 350B. In various embodiments, each energy device may be an energy source or an energy sensor.

An energy waveguide layer 330 includes waveguide elements 370 to steer energy waves from the seamless energy surface 360 along projected paths towards a series of convergence points. In this example, a holographic object 380 is formed at the series of convergence points. Notably, as illustrated, the convergence of energy at the holographic object 380 occurs on the viewer side (i.e., the front side) of the display surface 365. However, in other examples, the convergence of energy may be anywhere in the holographic object volume, which extends both in front of the display surface 365 and behind the display surface 365. The waveguide elements 370 can simultaneously steer incoming energy to an energy device (e.g., an energy sensor), as described below.
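The steering toward a convergence point can be sketched with an idealized pinhole/lenslet model (an illustrative simplification; `pixel_offset_for_target` is a hypothetical helper, and real waveguide elements are more complex than pinholes):

```python
def pixel_offset_for_target(lenslet_x, focal_length, target_x, target_z):
    """Pinhole/lenslet model: which sub-pixel offset under a waveguide
    element centered at lenslet_x emits a ray through the convergence
    point (target_x, target_z) in front of the display (z > 0)?
    The ray leaves the lenslet center with slope
    (target_x - lenslet_x) / target_z; the emitting pixel sits at the
    mirrored offset one focal length behind the lenslet."""
    slope = (target_x - lenslet_x) / target_z
    return -focal_length * slope

# Several lenslets cooperating to converge energy at one point
# (a holographic object 50 mm in front of the surface; all units mm).
target = (0.0, 50.0)
offsets = [pixel_offset_for_target(x, 2.0, *target) for x in (-10.0, 0.0, 10.0)]
```

Each lenslet chooses a different pixel offset so that its ray crosses the others at the same point in space; sampled densely across the surface, those crossings form the holographic object.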

In one example embodiment of the LF display module 300B, an emissive display is used as an energy source (e.g., energy device 340A) and an imaging sensor is used as an energy sensor (e.g., energy device 340B). In this manner, the LF display module 300B can simultaneously project holographic content and detect light from the volume in front of the display surface 365. This embodiment of the LF display module 300B thus functions as both a LF display and a LF sensor.

In an embodiment, the LF display module 300B is configured to simultaneously project a light field from projection locations on the display surface to the front of the display surface and capture a light field from the front of the display surface at the projection locations. In this embodiment, the energy relay device 350A connects a first set of locations at the seamless energy surface 360 positioned under the waveguide elements 370 to an energy device 340A. In an example, energy device 340A is an emissive display having an array of source pixels. The energy relay device 350B connects a second set of locations at the seamless energy surface 360 positioned under waveguide elements 370 to an energy device 340B. In an example, the energy device 340B is an imaging sensor having an array of sensor pixels. The LF display module 300B may be configured such that the locations at the seamless energy surface 360 that are under a particular waveguide element 370 are all emissive display locations, all imaging sensor locations, or some combination of these locations. In other embodiments, the bidirectional energy surface can project and receive various other forms of energy.

In another example embodiment of the LF display module 300B, the LF display module is configured to project two different types of energy. For example, in an embodiment, energy device 340A is an emissive display configured to emit electromagnetic energy and energy device 340B is an ultrasonic transducer configured to emit mechanical energy. As such, both light and sound can be projected from various locations at the seamless energy surface 360. In this configuration, energy relay device 350A connects the energy device 340A to the seamless energy surface 360 and relays the electromagnetic energy. The energy relay device is configured to have properties (e.g., a varying refractive index) which make it efficient for transporting electromagnetic energy. Energy relay device 350B connects the energy device 340B to the seamless energy surface 360 and relays mechanical energy. Energy relay device 350B is configured to have properties for efficient transport of ultrasound energy (e.g., a distribution of materials with different acoustic impedance). In some embodiments, the mechanical energy may be projected from locations between the waveguide elements 370 on the energy waveguide layer 330. The locations that project mechanical energy may form structures that serve to inhibit light from being transported from one electromagnetic waveguide element to another. In one example, a spatially separated array of locations that project ultrasonic mechanical energy can be configured to create three-dimensional haptic shapes and surfaces in mid-air. The surfaces may coincide with projected holographic objects (e.g., holographic object 380). In some examples, phase delays and amplitude variations across the array can assist in creating the haptic shapes.
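As a rough sketch of how phase delays could focus such an array, the code below aligns wavefront arrival times at a target point, assuming simple time-of-flight geometry and a nominal speed of sound in air (this is an illustrative model, not the patent's method; `focus_delays` is a hypothetical helper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, nominal value for air at room temperature

def focus_delays(transducer_positions, focal_point):
    """Per-transducer trigger delays (seconds) chosen so that ultrasonic
    wavefronts from all transducers arrive at focal_point simultaneously,
    creating a mid-air pressure maximum (a haptic point). Positions are
    (x, y, z) tuples in meters."""
    dists = [math.dist(p, focal_point) for p in transducer_positions]
    farthest = max(dists)
    # The farthest transducer fires first (zero delay); nearer ones wait.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]
```

Sweeping the focal point over a surface, and modulating amplitude as the paragraph notes, is one way such an array could trace out a haptic shape coincident with a holographic object.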

In various embodiments, the LF display module 300B with interleaved energy relay devices may include multiple energy device layers, with each energy device layer including a specific type of energy device. In these examples, the energy relay layers are configured to relay the appropriate type of energy between the seamless energy surface 360 and the energy device layer 310.

Tiled LF Display Modules

FIG. 4A is a perspective view of a portion of a LF display system 400 that is tiled in two dimensions to form a single-sided seamless surface environment, in accordance with one or more embodiments. The LF display system 400 includes a plurality of LF display modules that are tiled to form an array 410. More explicitly, each of the small squares in the array 410 represents a tiled LF display module 412. The LF display module 412 may be the same as the LF display module 300A or 300B. The array 410 may cover, for example, some or all of a surface (e.g., a wall) of a room. The LF array may cover other surfaces, such as, for example, a table top, a billboard, a rotunda, etc.

The array 410 may project one or more holographic objects. For example, in the illustrated embodiment, the array 410 projects a holographic object 420 and a holographic object 422. Tiling of the LF display modules 412 allows for a much larger viewing volume and allows objects to be projected at farther distances from the array 410. For example, in the illustrated embodiment, the viewing volume is, approximately, the entire area in front of and behind the array 410 rather than a localized volume in front of (and behind) a single LF display module 412.

In some embodiments, the LF display system 400 presents the holographic object 420 to a viewer 430 and a viewer 434. The viewer 430 and the viewer 434 receive different perspectives of the holographic object 420. For example, the viewer 430 is presented with a direct view of the holographic object 420, whereas the viewer 434 is presented with a more oblique view of the holographic object 420. As the viewer 430 and/or the viewer 434 move, they are presented with different perspectives of the holographic object 420. This allows a viewer to visually interact with a holographic object by moving relative to the holographic object. For example, as the viewer 430 walks around the holographic object 420, the viewer 430 sees different sides of the holographic object 420 as long as the holographic object 420 remains in the holographic object volume of the array 410. Accordingly, the viewer 430 and the viewer 434 may simultaneously see the holographic object 420 in real-world space as if it is truly there. Additionally, the viewer 430 and the viewer 434 do not need to wear an external device in order to see the holographic object 420, as the holographic object 420 is visible to viewers in much the same way a physical object would be visible. Additionally, here, the holographic object 422 is illustrated behind the array because the viewing volume of the array extends behind the surface of the array. In this manner, the holographic object 422 may be presented to the viewer 430 and/or the viewer 434.

In some embodiments, the LF display system 400 may include a tracking system that tracks positions of the viewer 430 and the viewer 434. In some embodiments, the tracked position is the position of a viewer. In other embodiments, the tracked position is that of the eyes of a viewer. The position tracking of the eyes is different from gaze tracking, which tracks where an eye is looking (e.g., uses orientation to determine gaze location). The eyes of the viewer 430 and the eyes of the viewer 434 are in different locations.

In various configurations, the LF display system 400 may include one or more tracking systems. For example, in the illustrated embodiment of FIG. 4A, the LF display system includes a tracking system 440 that is external to the array 410. Here, the tracking system may be a camera system coupled to the array 410. External tracking systems are described in more detail in regard to FIG. 5A. In other example embodiments, the tracking system may be incorporated into the array 410 as described herein. For example, an energy device (e.g., energy device 340) of one or more LF display modules 412 containing a bidirectional energy surface included in the array 410 may be configured to capture images of viewers in front of the array 410. In whichever case, the tracking system(s) of the LF display system 400 determine tracking information about the viewers (e.g., viewer 430 and/or viewer 434) viewing holographic content presented by the array 410.

Tracking information describes a position in space (e.g., relative to the tracking system) of a viewer, or a position of a portion of a viewer (e.g., one or both eyes of a viewer, or the extremities of a viewer). A tracking system may use any number of depth determination techniques to determine tracking information. The depth determination techniques may include, e.g., structured light, time of flight, stereo imaging, some other depth determination technique, or some combination thereof. The tracking system may include various systems configured to determine tracking information. For example, the tracking system may include one or more infrared sources (e.g., structured light sources), one or more imaging sensors that can capture images in the infrared (e.g., a red-blue-green-infrared camera), and a processor executing tracking algorithms. The tracking system may use the depth estimation techniques to determine positions of viewers. In some embodiments, the LF display system 400 generates holographic objects based on tracked positions, motions, or gestures of the viewer 430 and/or the viewer 434 as described herein. For example, the LF display system 400 may generate a holographic object responsive to a viewer coming within a threshold distance of the array 410 and/or a particular position.
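For instance, the stereo-imaging technique mentioned above reduces, in its idealized form, to the classic disparity relation depth = f·B/d. The sketch below is illustrative only; a real tracking pipeline adds camera calibration, rectification, and feature matching:

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Idealized depth from stereo imaging: two rectified cameras a known
    baseline apart see the same feature shifted by 'disparity' pixels.
    Depth (meters) = focal_length (pixels) * baseline (meters) / disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_length_px * baseline_m / disparity_px
```

With an 800 px focal length and a 10 cm baseline, a 40 px disparity places the tracked feature 2 m from the cameras; smaller disparities correspond to more distant viewers.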

The LF display system 400 may present one or more holographic objects that are customized to each viewer based in part on the tracking information. For example, the viewer 430 may be presented with the holographic object 420, but not the holographic object 422. Similarly, the viewer 434 may be presented with the holographic object 422, but not the holographic object 420. For example, the LF display system 400 tracks a position of each of the viewer 430 and the viewer 434. The LF display system 400 determines a perspective of a holographic object that should be visible to a viewer based on their position relative to where the holographic object is to be presented. The LF display system 400 selectively projects light from specific pixels that correspond to the determined perspective. Accordingly, the viewer 434 and the viewer 430 can simultaneously have experiences that are, potentially, completely different. In other words, the LF display system 400 may present holographic content to viewing sub-volumes of the viewing volume (i.e., similar to the viewing sub-volumes 290A, 290B, 290C, and 290D shown in FIG. 2B). For example, as illustrated, because the LF display system 400 can track the position of the viewer 430, the LF display system 400 may present space content (e.g., holographic object 420) to a viewing sub-volume surrounding the viewer 430 and safari content (e.g., holographic object 422) to a viewing sub-volume surrounding the viewer 434. In contrast, conventional systems would have to use individual headsets to provide a similar experience.
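The per-viewer selection step can be sketched minimally as a lookup from tracked position to viewing sub-volume (`assign_content` is a hypothetical helper, and axis-aligned boxes stand in for the actual sub-volume geometry formed by overlapping rays):

```python
def assign_content(viewer_positions, sub_volumes):
    """Map each tracked viewer to the viewing sub-volume containing them,
    so different holographic content can be rendered per sub-volume.
    viewer_positions: dict viewer -> (x, y, z).
    sub_volumes: dict name -> ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    def contains(bounds, point):
        return all(lo <= c <= hi for (lo, hi), c in zip(bounds, point))
    return {
        viewer: next((name for name, b in sub_volumes.items() if contains(b, pos)), None)
        for viewer, pos in viewer_positions.items()
    }
```

Once each viewer is mapped to a sub-volume, the renderer only drives the pixels whose rays land in that sub-volume with that viewer's content, which is how two viewers can simultaneously see space and safari scenes from the same surface.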

In some embodiments, the LF display system 400 may include one or more sensory feedback systems. The sensory feedback systems provide other sensory stimuli (e.g., tactile, audio, or smell) that augment the holographic objects 420 and 422. For example, in the illustrated embodiment of FIG. 4A, the LF display system 400 includes a sensory feedback system 442 external to the array 410. In one example, the sensory feedback system 442 may be an electrostatic speaker coupled to the array 410. External sensory feedback systems are described in more detail in regard to FIG. 5A. In other example embodiments, the sensory feedback system may be incorporated into the array 410 as described herein. For example, an energy device (e.g., energy device 340A in FIG. 3B) of a LF display module 412 included in the array 410 may be configured to project ultrasonic energy to viewers in front of the array and/or receive imaging information from viewers in front of the array. In whichever case, the sensory feedback system presents and/or receives sensory content to/from the viewers (e.g., viewer 430 and/or viewer 434) viewing holographic content (e.g., holographic object 420 and/or holographic object 422) presented by the array 410.

The LF display system 400 may include a sensory feedback system 442 that includes one or more acoustic projection devices external to the array. Alternatively or additionally, the LF display system 400 may include one or more acoustic projection devices integrated into the array 410 as described herein. The acoustic projection devices may consist of an array of ultrasonic sources configured to project a volumetric tactile surface. In some embodiments, the tactile surface may be coincident with a holographic object (e.g., at a surface of the holographic object 420) for one or more surfaces of the holographic object if a portion of a viewer gets within a threshold distance of the one or more surfaces. The volumetric tactile sensation may allow the user to touch and feel surfaces of the holographic object. The plurality of acoustic projection devices may also project an audible pressure wave that provides audio content (e.g., immersive audio) to viewers. Accordingly, the ultrasonic pressure waves and/or the audible pressure waves can act to complement a holographic object.

In various embodiments, the LF display system 400 may provide other sensory stimuli based in part on a tracked position of a viewer. For example, the holographic object 422 illustrated in FIG. 4A is a lion, and the LF display system 400 may have the holographic object 422 roar both visually (i.e., the holographic object 422 appears to roar) and audibly (i.e., one or more acoustic projection devices project a pressure wave that the viewer 430 perceives as a lion's roar emanating from the holographic object 422).

Note that, in the illustrated configuration, the holographic viewing volume may be limited in a manner similar to the viewing volume 285 of the LF display system 200 in FIG. 2. This can limit the amount of perceived immersion that a viewer will experience with a single wall display unit. One way to address this is to use multiple LF display modules that are tiled along multiple sides, as described below with respect to FIGS. 4B-4F.

FIG. 4B is a perspective view of a portion of a LF display system 402 in a multi-sided seamless surface environment, in accordance with one or more embodiments. The LF display system 402 is substantially similar to the LF display system 400 except that the plurality of LF display modules are tiled to create a multi-sided seamless surface environment. More specifically, the LF display modules are tiled to form an array that is a six-sided aggregated seamless surface environment. In FIG. 4B, the plurality of LF display modules cover all the walls, the ceiling, and the floor of a room. In other embodiments, the plurality of LF display modules may cover some, but not all, of a wall, a floor, a ceiling, or some combination thereof. In other embodiments, a plurality of LF display modules are tiled to form some other aggregated seamless surface. For example, the walls may be curved such that a cylindrical aggregated energy environment is formed. Moreover, as described below with regard to FIGS. 6-9, in some embodiments, the LF display modules may be tiled to form a surface in a presentation space (e.g., walls, etc.).

The LF display system 402 may project one or more holographic objects. For example, in the illustrated embodiment, the LF display system 402 projects the holographic object 420 into an area enclosed by the six-sided aggregated seamless surface environment. In this example, the viewing volume of the LF display system is also contained within the six-sided aggregated seamless surface environment. Note that, in the illustrated configuration, the viewer 434 may be positioned between the holographic object 420 and a LF display module 414 that is projecting energy (e.g., light and/or pressure waves) that is used to form the holographic object 420. Accordingly, the positioning of the viewer 434 may prevent the viewer 430 from perceiving the holographic object 420 formed from energy from the LF display module 414. However, in the illustrated configuration there is at least one other LF display module, e.g., a LF display module 416, that is unobstructed (e.g., by the viewer 434) and can project energy to form the holographic object 420 to be observed by the viewer 430. In this manner, occlusion by viewers in the space can cause some portion of the holographic projections to disappear, but the effect is much less than if only one side of the volume were populated with holographic display panels. Holographic object 422 is illustrated “outside” the walls of the six-sided aggregated seamless surface environment because the holographic object volume extends behind the aggregated surface. Thus, the viewer 430 and/or the viewer 434 can perceive the holographic object 422 as “outside” of the enclosed six-sided environment, which they can move throughout.

As described above in reference to FIG. 4A, in some embodiments, the LF display system 402 actively tracks positions of viewers and may dynamically instruct different LF display modules to present holographic content based on the tracked positions. Accordingly, a multi-sided configuration can provide a more robust environment (e.g., relative to FIG. 4A) for providing holographic objects where unconstrained viewers are free to move throughout the area enclosed by the multi-sided seamless surface environment.

Notably, various LF display systems may have different configurations. Further, each configuration may have a particular orientation of surfaces that, in aggregate, form a seamless display surface (an “aggregate surface”). That is, the LF display modules of a LF display system can be tiled to form a variety of aggregate surfaces. For example, in FIG. 4B, the LF display system 402 includes LF display modules tiled to form a six-sided aggregate surface that approximates the walls of a room. In some other examples, an aggregate surface may occur on only a portion of a surface (e.g., half of a wall) rather than a whole surface (e.g., an entire wall). Some examples are described herein.

In some configurations, the aggregate surface of a LF display system may be configured to project energy towards a localized viewing volume. Projecting energy to a localized viewing volume allows for a higher quality viewing experience by, for example, increasing the density of projected energy in a specific viewing volume, increasing the FOV for the viewers in that volume, and bringing the viewing volume closer to the display surface.

For example, FIG. 4C illustrates a top-down view of a LF display system 450A with an aggregate surface in a "winged" configuration. In this example, the LF display system 450A is located in a room with a front wall 452, a rear wall 454, a first sidewall 456, a second sidewall 458, a ceiling (not shown), and a floor (not shown). The first sidewall 456, the second sidewall 458, the rear wall 454, the floor, and the ceiling are all orthogonal. The LF display system 450A includes LF display modules tiled to form an aggregate surface 460 covering the front wall. The front wall 452, and thus the aggregate surface 460, includes three portions: (i) a first portion 462 approximately parallel with the rear wall 454 (i.e., a central surface), (ii) a second portion 464 connecting the first portion 462 to the first sidewall 456 and placed at an angle to project energy towards the center of the room (i.e., a first side surface), and (iii) a third portion 466 connecting the first portion 462 to the second sidewall 458 and placed at an angle to project energy towards the center of the room (i.e., a second side surface). The first portion is a vertical plane in the room and has a horizontal and a vertical axis. The second and third portions are angled towards the center of the room along the horizontal axis.

In this example, the viewing volume 468A of the LF display system 450A is in the center of the room and partially surrounded by the three portions of the aggregate surface 460. An aggregate surface that at least partially surrounds a viewer (a "surrounding surface") increases the immersive experience of the viewers.

To illustrate, consider, for example, an aggregate surface with only a central surface. Referring to FIG. 2A, the rays that are projected from either end of the display surface create an ideal holographic volume and ideal viewing volumes as described above. Now consider, for example, if the central surface included two side surfaces angled towards the viewer. In this case, ray 256 and ray 257 would be projected at a greater angle from a normal of the central surface. Thus, the field of view of the viewing volume would increase. Similarly, the holographic viewing volume would be nearer the display surface. Additionally, because the second and third portions are tilted nearer the viewing volume, the holographic objects that are projected at a fixed distance from the display surface are closer to that viewing volume.

To simplify, a display surface with only a central surface has a planar field of view, a planar threshold separation between the (central) display surface and the viewing volume, and a planar proximity between a holographic object and the viewing volume. Adding one or more side surfaces angled towards the viewer increases the field of view relative to the planar field of view, decreases the separation between the display surface and the viewing volume relative to the planar separation, and increases the proximity between a holographic object and the viewing volume relative to the planar proximity. Further angling the side surfaces towards the viewer further increases the field of view, decreases the separation, and increases the proximity. In other words, the angled placement of the side surfaces increases the immersive experience for viewers.
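The geometric effect of angled side surfaces on the field of view can be sketched numerically. The following is a minimal illustration (all dimensions are hypothetical, not taken from the figures): it compares the horizontal angle subtended at a viewer position by a flat central surface alone versus the same surface with two side surfaces angled 45 degrees toward the room.

```python
import math

def fov_deg(edge_points, viewer):
    """Horizontal field of view (degrees) subtended at `viewer` by the
    extreme edges of an aggregate surface, given as (x, z) points."""
    angles = [math.atan2(z - viewer[1], x - viewer[0]) for x, z in edge_points]
    return math.degrees(max(angles) - min(angles))

viewer = (0.0, 3.0)                     # viewer 3 m from the central surface

# Central surface only: a 4 m wide flat panel along the front wall.
flat = [(-2.0, 0.0), (2.0, 0.0)]

# Same panel plus two 1 m side surfaces angled 45 degrees into the room,
# which moves the outermost display edges toward the viewer.
wing = math.radians(45)
winged = [(-2.0 - math.cos(wing), math.sin(wing)),
          (2.0 + math.cos(wing), math.sin(wing))]

print(fov_deg(flat, viewer))            # planar field of view
print(fov_deg(winged, viewer))          # larger field of view with side surfaces
```

As the text states, the angled side surfaces both widen the subtended angle and bring the outermost display edges closer to the viewing volume.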

Additionally, as described below in regard to FIG. 6, deflection optics may be used to optimize the size and position of the viewing volume for LF display parameters (e.g., dimensions and FOV).

In a similar example, FIG. 4D illustrates a side view of a LF display system 450B with an aggregate surface in a "sloped" configuration. In this example, the LF display system 450B is located in a room with a front wall 452, a rear wall 454, a first sidewall (not shown), a second sidewall (not shown), a ceiling 472, and a floor 474. The first sidewall, the second sidewall, the rear wall 454, the floor 474, and the ceiling 472 are all orthogonal. The LF display system 450B includes LF display modules tiled to form an aggregate surface 460 covering the front wall. The front wall 452, and thus the aggregate surface 460, includes three portions: (i) a first portion 462 approximately parallel with the rear wall 454 (i.e., a central surface), (ii) a second portion 464 connecting the first portion 462 to the ceiling 472 and angled to project energy towards the center of the room (i.e., a first side surface), and (iii) a third portion 466 connecting the first portion 462 to the floor 474 and angled to project energy towards the center of the room (i.e., a second side surface). The first portion is a vertical plane in the room and has a horizontal and a vertical axis. The second and third portions are angled towards the center of the room along the vertical axis.

In this example, the viewing volume 468B of the LF display system 450B is in the center of the room and partially surrounded by the three portions of the aggregate surface 460. Similar to the configuration shown in FIG. 4C, the two side portions (e.g., the second portion 464 and the third portion 466) are angled to surround the viewer and form a surrounding surface. The surrounding surface increases the viewing FOV from the perspective of any viewer in the holographic viewing volume 468B. Additionally, the surrounding surface allows the viewing volume 468B to be closer to the surface of the displays such that projected objects appear closer. In other words, the angled placement of the side surfaces increases the field of view, decreases the separation, and increases the proximity of the aggregate surface, thereby increasing the immersive experience for viewers. Further, as will be discussed below, deflection optics may be used to optimize the size and position of the viewing volume 468B.

The sloped configuration of the side portions of the aggregate surface 460 enables holographic content to be presented closer to the viewing volume 468B than if the third portion 466 were not sloped. For example, the lower extremities (e.g., legs) of a character presented from a LF display system in a sloped configuration may seem closer and more realistic than if a LF display system with a flat front wall were used.

Additionally, the configuration of the LF display system and the environment in which it is located may inform the shape and locations of the viewing volumes and viewing sub-volumes.

FIG. 4E, for example, illustrates a top-down view of a LF display system 450C with an aggregate surface 460 on a front wall 452 of a room. In this example, the LF display system 450C is located in a room with a front wall 452, a rear wall 454, a first sidewall 456, a second sidewall 458, a ceiling (not shown), and a floor (not shown).

The LF display system 450C projects various rays from the aggregate surface 460. From each position on the display surface, light rays are projected in an angular range that is centered on the viewing volume. The rays projected from the left side of the aggregate surface 460 have a horizontal angular range 481, rays projected from the right side of the aggregate surface have a horizontal angular range 482, and rays projected from the center of the aggregate surface 460 have a horizontal angular range 483. In between these points, the projected rays may take on intermediate values of angle ranges as described below in regard to FIG. 6. Having a gradient deflection angle in the projected rays across the display surface in this manner creates a viewing volume 468C. Further, this configuration avoids wasting resolution of the display on projecting rays into the sidewalls 456 and 458.
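The gradient deflection described above can be sketched as a per-position aiming angle: each point on the display surface centers its projected angular range on the viewing-volume center, so the deflection varies smoothly from one edge of the surface to the other. A minimal sketch under assumed geometry (the function name and dimensions are hypothetical):

```python
import math

def center_deflection_deg(x, viewing_center):
    """Deflection (degrees from the surface normal) that centers the
    projected angular range, at position `x` on the display surface,
    on the viewing-volume center. `viewing_center` is a
    (horizontal offset, depth into the room) pair in meters."""
    cx, cz = viewing_center
    return math.degrees(math.atan2(cx - x, cz))

center = (0.0, 4.0)                     # viewing volume centered 4 m into the room
for x in (-3.0, 0.0, 3.0):              # left edge, center, right edge of a 6 m surface
    print(f"x = {x:+.1f} m -> deflection {center_deflection_deg(x, center):+.1f} deg")
```

The left edge deflects rays inward to the right, the right edge deflects them to the left, and positions in between take intermediate values, producing the gradient described in the text.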

FIG. 4F illustrates a side view of a LF display system 450D with an aggregate surface 460 on a front wall 452 of a room. In this example, the LF display system 450D is located in a room with a front wall 452, a rear wall 454, a first sidewall (not shown), a second sidewall (not shown), a ceiling 472, and a floor 474. In this example, the floor is tiered such that each tier rises in steps moving from the front wall to the back wall. Here, each tier of the floor includes a viewing sub-volume (e.g., viewing sub-volumes 470A and 470B). A tiered floor allows for viewing sub-volumes that do not overlap. That is, each viewing sub-volume has a line of sight from the viewing sub-volume to the aggregate surface 460 that does not pass through another viewing sub-volume. In other words, this orientation produces a "stadium seating" effect in which the vertical offset between tiers allows an unobstructed line of sight, allowing each tier to "see over" the viewing sub-volumes of other tiers. LF display systems including viewing sub-volumes that do not overlap may provide a higher quality viewing experience than LF display systems that have viewing volumes that do overlap. For example, in the configuration shown in FIG. 4F, different holographic content may be projected to the audiences in viewing sub-volumes 470A and 470B.
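The "see over" condition for tiered viewing sub-volumes reduces to a simple line-of-sight test: the ray from a rear-tier eye position to a point on the display surface must pass above the heads of viewers on lower tiers. A minimal sketch with hypothetical heights and distances:

```python
def sightline_clear(rear_eye, front_head, display_point):
    """True if the line from a rear-tier eye to a point on the display
    passes above a front-tier viewer's head. Points are
    (distance_from_front_wall_m, height_m) pairs."""
    (xr, hr), (xf, hf), (xd, hd) = rear_eye, front_head, display_point
    t = (xf - xr) / (xd - xr)            # fraction of the way to the display
    sight_height = hr + t * (hd - hr)    # sightline height at the front tier
    return sight_height > hf

# Display bottom at the wall (0 m, 1.0 m high); front-tier head at 3 m, 1.6 m;
# rear-tier eyes raised by a step to 2.3 m, located 5 m from the wall.
print(sightline_clear((5.0, 2.3), (3.0, 1.6), (0.0, 1.0)))  # True: tier sees over
print(sightline_clear((5.0, 1.6), (3.0, 1.6), (0.0, 1.0)))  # False without the step
```

The vertical offset between tiers is exactly what makes the first case clear and the second obstructed.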

Control of a LF Display System

FIG. 5A is a block diagram of a LF display system 500, in accordance with one or more embodiments. The LF display system 500 comprises a LF display assembly 510 and a controller 520. The LF display assembly 510 includes one or more LF display modules 512 which project a light field. A LF display module 512 may include a source/sensor system 514 that includes integrated energy source(s) and/or energy sensor(s) which project and/or sense other types of energy. The controller 520 includes a data store 522, a network interface 524, and a LF processing engine 530. The controller 520 may also include a tracking module 526 and a viewer profiling module 528. In some embodiments, the LF display system 500 also includes a sensory feedback system 570 and a tracking system 580. The LF display systems described in the context of FIGS. 1, 2, 3, and 4 are embodiments of the LF display system 500. In other embodiments, the LF display system 500 comprises additional or fewer modules than those described herein. Similarly, the functions can be distributed among the modules and/or different entities in a different manner than is described here. Applications of the LF display system 500 are also discussed in detail below with regard to FIGS. 6-9.

The LF display assembly 510 provides holographic content in a holographic object volume that may be visible to viewers located within a viewing volume. The LF display assembly 510 may provide holographic content by executing display instructions received from the controller 520. The holographic content may include one or more holographic objects that are projected in front of an aggregate surface of the LF display assembly 510, behind the aggregate surface of the LF display assembly 510, or some combination thereof. Generating display instructions with the controller 520 is described in more detail below.

The LF display assembly 510 provides holographic content using one or more LF display modules (e.g., any of the LF display module 110, the LF display system 200, and the LF display module 300) included in the LF display assembly 510. For convenience, the one or more LF display modules may be described herein as the LF display module 512. The LF display modules 512 can be tiled to form the LF display assembly 510. The LF display modules 512 may be structured as various seamless surface environments (e.g., single sided, multi-sided, a wall of a presentation space, a curved surface, etc.). That is, the tiled LF display modules form an aggregate surface. As previously described, a LF display module 512 includes an energy device layer (e.g., energy device layer 220) and an energy waveguide layer (e.g., energy waveguide layer 240) that present holographic content. The LF display module 512 may also include an energy relay layer (e.g., energy relay layer 230) that transfers energy between the energy device layer and the energy waveguide layer when presenting holographic content.

The LF display module 512 may also include other integrated systems configured for energy projection and/or energy sensing as previously described. For example, a light field display module 512 may include any number of energy devices (e.g., energy device 340) configured to project and/or sense energy. For convenience, the integrated energy projection systems and integrated energy sensing systems of the LF display module 512 may be described herein, in aggregate, as the source/sensor system 514. The source/sensor system 514 is integrated within the LF display module 512, such that the source/sensor system 514 shares the same seamless energy surface with the LF display module 512. In other words, the aggregate surface of an LF display assembly 510 includes the functionality of both the LF display module 512 and the source/sensor system 514. That is, an LF display assembly 510 including a LF display module 512 with a source/sensor system 514 may project energy and/or sense energy while simultaneously projecting a light field. For example, the LF display assembly 510 may include a LF display module 512 and a source/sensor system 514 configured as a dual-energy surface or bidirectional energy surface as previously described.

In some embodiments, the LF display system 500 augments the generated holographic content with other sensory content (e.g., coordinated touch, audio, or smell) using a sensory feedback system 570. The sensory feedback system 570 may augment the projection of holographic content by executing display instructions received from the controller 520. Generally, the sensory feedback system 570 includes any number of sensory feedback devices external to the LF display assembly 510 (e.g., sensory feedback system 442). Some example sensory feedback devices may include coordinated acoustic projecting and receiving devices, aroma projecting devices, temperature adjustment devices, force actuation devices, pressure sensors, transducers, etc. In some cases, the sensory feedback system 570 may have similar functionality to the light field display assembly 510 and vice versa. For example, both a sensory feedback system 570 and a light field display assembly 510 may be configured to generate a sound field. As another example, the sensory feedback system 570 may be configured to generate haptic surfaces while the light field display assembly 510 is not.

To illustrate, in an example embodiment of a light field display system 500, a sensory feedback system 570 may include one or more acoustic projection devices. The one or more acoustic projection devices are configured to generate one or more pressure waves that complement the holographic content when executing display instructions received from the controller 520. The generated pressure waves may be, e.g., audible (for sound), ultrasonic (for touch), or some combination thereof. Similarly, the sensory feedback system 570 may include an aroma projecting device. The aroma projecting device may be configured to provide scents to some, or all, of the target area when executing display instructions received from the controller. The aroma devices may be tied into an air circulation system (e.g., ducting, fans, or vents) to coordinate air flow within the target area. Further, the sensory feedback system 570 may include a temperature adjustment device. The temperature adjustment device is configured to increase or decrease temperature in some, or all, of the target area when executing display instructions received from the controller 520.

In some embodiments, the sensory feedback system 570 is configured to receive input from viewers of the LF display system 500. In this case, the sensory feedback system 570 includes various sensory feedback devices for receiving input from viewers. The sensory feedback devices may include devices such as acoustic receiving devices (e.g., a microphone), pressure sensors, joysticks, motion detectors, transducers, etc. The sensory feedback system may transmit the detected input to the controller 520 to coordinate generating holographic content and/or sensory feedback.

To illustrate, in an example embodiment of a light field display assembly, a sensory feedback system 570 includes a microphone. The microphone is configured to record audio produced by one or more viewers (e.g., verbal commands, dialogue, gasps, screams, laughter, etc.). The sensory feedback system 570 provides the recorded audio to the controller 520 as viewer input. The controller 520 may use the viewer input to generate holographic content. Similarly, the sensory feedback system 570 may include a pressure sensor. The pressure sensor is configured to measure forces applied by viewers to the pressure sensor. The sensory feedback system 570 may provide the measured forces to the controller 520 as viewer input.

In some embodiments, the LF display system 500 includes a tracking system 580. The tracking system 580 includes any number of tracking devices configured to determine the position, movement, and/or characteristics of viewers in the target area. Generally, the tracking devices are external to the LF display assembly 510. Some example tracking devices include a camera assembly ("camera"), a depth sensor, structured light, a LIDAR system, a card scanning system, or any other tracking device that can track viewers within a target area.

The tracking system 580 may include one or more energy sources that illuminate some or all of the target area with light. However, in some cases, the target area is illuminated with natural light and/or ambient light from the LF display assembly 510 when presenting holographic content. The energy source projects light when executing instructions received from the controller 520. The light may be, e.g., a structured light pattern, a pulse of light (e.g., an IR flash), or some combination thereof. The tracking system may project light in the visible band (˜380 nm to 750 nm), in the infrared (IR) band (˜750 nm to 1700 nm), in the ultraviolet band (10 nm to 380 nm), some other portion of the electromagnetic spectrum, or some combination thereof. A source may include, e.g., a light emitting diode (LED), a micro LED, a laser diode, a TOF depth sensor, a tunable laser, etc.
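The band boundaries quoted above can be restated as a small helper. This is only a sketch of the ranges given in the text; the approximate ("˜") boundaries are treated as exact for simplicity:

```python
def band(wavelength_nm):
    """Classify a source wavelength into the bands named in the text:
    ultraviolet (10-380 nm), visible (~380-750 nm), IR (~750-1700 nm)."""
    if 10 <= wavelength_nm < 380:
        return "ultraviolet"
    if 380 <= wavelength_nm <= 750:
        return "visible"
    if 750 < wavelength_nm <= 1700:
        return "infrared"
    return "other"

print(band(550))   # visible
print(band(940))   # infrared, a common tracking-illumination wavelength
```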

The tracking system 580 may adjust one or more emission parameters when executing instructions received from the controller 520. An emission parameter is a parameter that affects how light is projected from a source of the tracking system 580. An emission parameter may include, e.g., brightness, pulse rate (to include continuous illumination), wavelength, pulse length, some other parameter that affects how light is projected from the source assembly, or some combination thereof. In one embodiment, a source projects pulses of light in a time-of-flight operation.
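For the time-of-flight operation mentioned above, depth follows from the round-trip time of a pulse: the pulse travels to the viewer and back, so the distance is half the total path length. A minimal sketch (function name hypothetical):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s):
    """Depth of a reflecting surface from the round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# A pulse returning after 20 ns implies a reflector roughly 3 m away.
print(tof_depth_m(20e-9))  # ~2.998 m
```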

The camera of the tracking system 580 captures images of the light (e.g., a structured light pattern) reflected from the target area. The camera captures images when executing tracking instructions received from the controller 520. As previously described, the light may be projected by a source of the tracking system 580. The camera may include one or more cameras. That is, a camera may be, e.g., an array (1D or 2D) of photodiodes, a CCD sensor, a CMOS sensor, some other device that detects some or all of the light projected by the tracking system 580, or some combination thereof. In an embodiment, the tracking system 580 may contain a light field camera external to the LF display assembly 510. In other embodiments, the cameras are included as part of the source/sensor system 514 included in the LF display assembly 510. For example, as previously described, if the energy relay element of a light field module 512 is a bidirectional energy layer which interleaves both emissive displays and imaging sensors at the energy device layer 220, the LF display assembly 510 can be configured to simultaneously project light fields and record imaging information from the viewing area in front of the display. In one embodiment, the captured images from the bidirectional energy surface form a light field camera. The camera provides captured images to the controller 520.

The camera of the tracking system 580 may adjust one or more imaging parameters when executing tracking instructions received from the controller 520. An imaging parameter is a parameter that affects how the camera captures images. An imaging parameter may include, e.g., frame rate, aperture, gain, exposure length, frame timing, rolling shutter or global shutter capture modes, some other parameter that affects how the camera captures images, or some combination thereof.

The controller 520 controls the LF display assembly 510 and any other components of the LF display system 500. The controller 520 comprises a data store 522, a network interface 524, a tracking module 526, a viewer profiling module 528, and a light field processing engine 530. In other embodiments, the controller 520 comprises additional or fewer modules than those described herein. Similarly, the functions can be distributed among the modules and/or different entities in a different manner than is described here. For example, the tracking module 526 may be part of the LF display assembly 510 or the tracking system 580.

The data store 522 is a memory that stores information for the LF display system 500. The stored information may include display instructions, tracking instructions, emission parameters, imaging parameters, a virtual model of a target area, tracking information, images captured by the camera, one or more viewer profiles, calibration data for the light field display assembly 510, configuration data for the LF display system 500 including resolution and orientation of the LF modules 512, desired viewing volume geometry, content for graphics creation including 3D models, scenes and environments, materials and textures, other information that may be used by the LF display system 500, or some combination thereof. The data store 522 is a memory, such as a read-only memory (ROM), dynamic random access memory (DRAM), static random access memory (SRAM), or some combination thereof.

The network interface 524 allows the light field display system to communicate with other systems or environments via a network. In one example, the LF display system 500 receives holographic content from a remote light field display system via the network interface 524. In another example, the LF display system 500 transmits holographic content to a remote data store using the network interface 524.

The tracking module 526 tracks viewers viewing content presented by the LF display system 500. To do so, the tracking module 526 generates tracking instructions that control operation of the source(s) and/or the camera(s) of the tracking system 580, and provides the tracking instructions to the tracking system 580. The tracking system 580 executes the tracking instructions and provides tracking input to the tracking module 526.

The tracking module 526 may determine a position of one or more viewers within the target area (e.g., sitting in the seats of a presentation space). The determined position may be relative to, e.g., some reference point (e.g., a display surface). In other embodiments, the determined position may be within the virtual model of the target area. The tracked position may be, e.g., the tracked position of a viewer and/or a tracked position of a portion of a viewer (e.g., eye location, hand location, etc.). The tracking module 526 determines the position using one or more captured images from the cameras of the tracking system 580. The cameras of the tracking system 580 may be distributed about the LF display system 500, and can capture images in stereo, allowing the tracking module 526 to passively track viewers. In other embodiments, the tracking module 526 actively tracks viewers. That is, the tracking system 580 illuminates some portion of the target area, images the target area, and the tracking module 526 uses time of flight and/or structured light depth determination techniques to determine position. The tracking module 526 generates tracking information using the determined positions.
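For the passive stereo case, depth can be recovered by triangulation between the two captured images. A minimal sketch for a rectified camera pair (all parameters hypothetical):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: z = f * b / d, where f is the
    focal length in pixels, b the camera baseline in meters, and d the
    disparity (pixel offset of the same feature between the two views)."""
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 20 cm baseline, 40 px disparity between the two views.
print(stereo_depth_m(800.0, 0.2, 40.0))  # 4.0 m
```

Nearer viewers produce larger disparities, so the same formula lets the tracking module rank viewer distances directly from feature offsets.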

The tracking module 526 may also receive tracking information as inputs from viewers of the LF display system 500. The tracking information may include body movements that correspond to various input options that the viewer is provided by the LF display system 500. For example, the tracking module 526 may track a viewer's body movement and assign a given movement as an input to the LF processing engine 530. The tracking module 526 may provide the tracking information to the data store 522, the LF processing engine 530, the viewer profiling module 528, any other component of the LF display system 500, or some combination thereof.

To provide context for the tracking module 526, consider an example embodiment of an LF display system 500 that displays a video game in which a team defeats the enemy team by destroying their base. In response to the content, a viewer fist pumps the air to show their excitement. The tracking system 580 may record the movement of the viewer's hands and transmit the recording to the tracking module 526. This may be achieved with a tracking system 580 comprised of cameras, depth sensors, or other devices that are external to the light field display assembly 510, or with a display surface which simultaneously projects light field images and records images, wherein the images recorded from the display surface may be light field images, or any combination of these devices, as previously described. The tracking module 526 tracks the motion of the viewer's hands in the recording and sends the input to the LF processing engine 530. The viewer profiling module 528, as described below, determines that information in the image indicates that the motion of the viewer's hands is associated with a positive response. Accordingly, if enough viewers are recognized as having a positive response, the LF processing engine 530 generates appropriate holographic content to celebrate the victory. For example, the LF processing engine 530 may project confetti in the scene.

The LF display system 500 includes a viewer profiling module 528 configured to identify and profile viewers. The viewer profiling module 528 generates a profile of a viewer (or viewers) that views holographic content displayed by a LF display system 500. The viewer profiling module 528 generates a viewer profile based, in part, on viewer input and monitored viewer behavior, actions, and reactions. The viewer profiling module 528 can access information obtained from the tracking system 580 (e.g., recorded images, videos, sound, etc.) and process that information to determine various information. In various examples, the viewer profiling module 528 can use any number of machine vision or machine hearing algorithms to determine viewer behavior, actions, and reactions. Monitored viewer behavior can include, for example, smiles, cheering, clapping, laughing, fright, screams, excitement levels, recoiling, other changes in gestures, or movement by the viewers, etc.

More generally, a viewer profile may include any information received and/or determined about a viewer viewing holographic content from the LF display system. For example, each viewer profile may log actions or responses of that viewer to the content displayed by the LF display system 500. Some example information that can be included in a viewer profile is provided below.

In some embodiments, a viewer profile may describe a response of a viewer within the presentation space with respect to video game content displayed in the presentation space (e.g., a team, a type of content, an outcome, etc.). For example, a viewer profile may indicate that a viewer generally has a positive response to a particular eSports team having a mascot of a horse's head on a blue shield.

In some embodiments, a viewer profile can indicate characteristics of a viewer viewing a gaming application or a player playing a gaming application. For example, suppose a user in a presentation space is wearing a particular outfit. In this case, the viewer profile can be used to generate a virtual avatar that accurately represents the particular outfit. Further, the particular outfit may indicate some characteristics about the viewer. More broadly, viewer characteristics stored about the viewer can be, for example, age, sex, ethnicity, clothing, viewing location in the presentation space, etc.

In some embodiments, a viewer profile can indicate preferences for a viewer in regard to gaming application content, presentation space characteristics, or both. For example, a viewer profile may indicate that a viewer prefers only to view holographic content that is age appropriate for everyone in their family. In another example, a viewer profile may indicate holographic object volumes to display holographic content (e.g., on a wall) and holographic object volumes to not display holographic content (e.g., above their head). The viewer profile may also indicate that the viewer prefers to have haptic interfaces presented near them, or prefers to avoid them.

In another example, a viewer profile indicates a history of gaming applications for a particular viewer. For instance, the viewer profiling module 528 determines that a viewer, or group of viewers, has previously executed a gaming application. As such, the LF display system 500 may display holographic content that references previous actions taken while executing the gaming application. As one example, a gaming application including holographic content may comprise a series of four missions of increasing difficulty to trigger a victory. The viewer profile indicates that the viewers have completed two of the four missions. As such, the gaming application presents holographic video game content for the third mission to the viewers. The video game content may reference actions taken in the first or second mission.

In some embodiments, a viewer profile may also describe characteristics and preferences for a group of viewers rather than a particular viewer. For example, the viewer profiling module 528 may generate a viewer profile for the audience viewing video game content generated by a gaming application in the presentation space. In one example, the viewer profiling module 528 creates a viewer profile for viewers viewing an eSports event broadcasting a competition of a gaming application. The profile indicates that 54.3% of the viewers are men between the ages of 20 and 35 and have a positive response to the gaming application. The profile also indicates that the remaining 45.7% of the viewers are having a mediocre response to the gaming application. Any of the previously described information and characteristics may be applied to a group of viewers.
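Aggregating individual profiles into a group profile like the one described can be sketched as a simple tally over per-viewer response records (the record structure and field names are hypothetical, not taken from the source):

```python
from collections import Counter

def audience_profile(viewer_profiles):
    """Fraction of the audience with each monitored response category."""
    responses = Counter(p["response"] for p in viewer_profiles)
    total = len(viewer_profiles)
    return {response: count / total for response, count in responses.items()}

# Hypothetical per-viewer records assembled by a profiling module.
profiles = [
    {"id": 1, "response": "positive"},
    {"id": 2, "response": "positive"},
    {"id": 3, "response": "mediocre"},
    {"id": 4, "response": "positive"},
]
print(audience_profile(profiles))  # {'positive': 0.75, 'mediocre': 0.25}
```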

The viewer profiling module 528 may also access a profile associated with a particular viewer (or viewers) from a third-party system (or systems) to build a viewer profile. For example, a viewer purchases a ticket for an eSports event using a third-party vendor that is linked to that viewer's social media account. When the viewer enters a presentation space for the gaming application using their ticket, the viewer profiling module 528 can access information from their social media account to build (or augment) a viewer profile.

In some embodiments, the data store 522 includes a viewer profile store that stores viewer profiles generated, updated, and/or maintained by the viewer profiling module 528. The viewer profile can be updated in the data store at any time by the viewer profiling module 528. For example, in an embodiment, the viewer profile store receives and stores information regarding a particular viewer in their viewer profile when the particular viewer views holographic content provided by the LF display system 500. In this example, the viewer profiling module 528 includes a facial recognition algorithm that may recognize viewers and positively identify them as they view presented holographic content. To illustrate, as a viewer enters the target area of the LF display system 500, the tracking system 580 obtains an image of the viewer. The viewer profiling module 528 inputs the captured image and identifies the viewer's face using the facial recognition algorithm. The identified face is associated with a viewer profile in the profile store and, as such, all input information obtained about that viewer may be stored in their profile. The viewer profiling module may also utilize card identification scanners, voice identifiers, radio-frequency identification (RFID) chip scanners, barcode scanners, etc. to positively identify a viewer.

In embodiments where the viewer profiling module 528 can positively identify viewers, the viewer profiling module 528 can determine each visit of each viewer to the LF display system 500. The viewer profiling module 528 may then store the time and date of each visit in the viewer profile for each viewer. Similarly, the viewer profiling module 528 may store received inputs from a viewer from any combination of the sensory feedback system 570, the tracking system 580, and/or the LF display assembly 510 each time they occur. The viewer profiling module 528 may additionally receive further information about a viewer from other modules or components of the controller 520, which can then be stored with the viewer profile. Other components of the controller 520 may then also access the stored viewer profiles for determining subsequent content to be provided to that viewer.

The LF processing engine 530 generates holographic content comprising light field data, as well as data for all of the sensory domains supported by a LF display system 500. For example, the LF processing engine 530 may generate 4D coordinates in a rasterized format (“rasterized data”) that, when executed by the LF display assembly 510, cause the LF display assembly 510 to present holographic content. The LF processing engine 530 may access the rasterized data from the data store 522. Additionally, the LF processing engine 530 may construct rasterized data from a vectorized data set. Vectorized data is described below. The LF processing engine 530 can also generate sensory instructions required to provide sensory content that augments the holographic objects. As described above, sensory instructions may generate, when executed by the LF display system 500, haptic surfaces, sound fields, and other forms of sensory energy supported by the LF display system 500. The LF processing engine 530 may access sensory instructions from the data store 522, or construct the sensory instructions from a vectorized data set. In aggregate, the 4D coordinates and sensory data represent holographic content as display instructions executable by a LF display system to generate holographic and sensory content. More generally, holographic content can take the form of CG content with ideal light field coordinates, live action content, rasterized data, vectorized data, electromagnetic energy transported by a set of relays, instructions sent to a group of energy devices, energy locations on one or more energy surfaces, the set of energy propagation paths that are projected from the display surface, a holographic object that is visible to a viewer or an audience, and many other similar forms.

The amount of rasterized data describing the flow of energy through the various energy sources in a LF display system 500 is incredibly large. While it is possible to display the rasterized data on a LF display system 500 when accessed from a data store 522, it is untenable to efficiently transmit, receive (e.g., via a network interface 524), and subsequently display the rasterized data on a LF display system 500. Take, for example, rasterized data representing even a small amount of content generated by a gaming application for holographic projection by a LF display system 500. In this example, the LF display system 500 includes a display containing several gigapixels, and the rasterized data contains information for each pixel location on the display. The corresponding size of the rasterized data is vast (e.g., many gigabytes per second of holographic content), and unmanageable for efficient transfer over commercial networks via a network interface 524. The efficient-transfer problem may be amplified for applications including live streaming of holographic content. An additional problem with merely storing rasterized data in the data store 522 arises when an interactive experience is desired using inputs from the sensory feedback system 570 or the tracking module 526. To enable an interactive experience, the light field content generated by the LF processing engine 530 can be modified in real time in response to sensory or tracking inputs. In other words, in some cases, LF content cannot simply be read from the data store 522.

Therefore, in some configurations, data representing holographic content for display by a LF display system 500 may be transferred to the LF processing engine 530 in a vectorized data format (“vectorized data”). Vectorized data may be orders of magnitude smaller than rasterized data. Further, vectorized data provides high image quality while having a data set size that enables efficient sharing of the data. For example, vectorized data may be a sparse data set derived from a denser data set. Thus, vectorized data may have an adjustable balance between image quality and data transmission size based on how sparsely the vectorized data is sampled from dense rasterized data. Tunable sampling to generate vectorized data enables optimization of image quality for a given network speed. Consequently, vectorized data enables efficient transmission of holographic content via a network interface 524. Vectorized data also enables holographic content to be live-streamed over a commercial network.
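The tunable sampling described above can be sketched as follows. This is an illustrative sketch only; the disclosure does not specify a vectorization algorithm, and the function name `vectorize`, the bandwidth heuristic, and the constants are assumptions.

```python
# Illustrative sketch only: the stride heuristic and names are assumptions,
# not an algorithm defined by the disclosure.

def vectorize(dense_rows, network_mbps, full_quality_mbps=10_000):
    """Subsample dense rasterized rows into a sparse 'vectorized' set.

    The stride grows as available bandwidth shrinks, trading image
    quality for a smaller transmission size.
    """
    # Keep every row when bandwidth is plentiful; skip more rows as
    # bandwidth drops (stride >= 1).
    stride = max(1, round(full_quality_mbps / max(network_mbps, 1)))
    sparse = dense_rows[::stride]
    return sparse, stride

dense = list(range(1000))          # stand-in for dense rasterized samples
sparse, stride = vectorize(dense, network_mbps=100)
print(stride, len(sparse))         # coarser sampling on a slower link
```

On a fast link the stride collapses to 1 and the full dense set passes through, reflecting the adjustable quality/size balance described above.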

In summary, the LF processing engine 530 may generate holographic content derived from rasterized data accessed from the data store 522, vectorized data accessed from the data store 522, or vectorized data received via the network interface 524. In various configurations, vectorized data may be encoded by an encoder before data transmission, and decoded by a decoder within the LF controller 520 after reception. The encoder and decoder pair may be part of the same proprietary codec system. In some examples, the vectorized data is encoded for added data security and for gaming application improvements related to data compression. For example, vectorized data received by the network interface may be encoded vectorized data received from a holographic streaming application. In some examples, a decoder, the LF processing engine 530, or both may be required to access information content encoded in vectorized data. The encoder and/or decoder systems may be available to customers or licensed to third-party vendors. Other example encoding and/or decoding schemes can be employed to transmit and/or present holographic content.

Vectorized data contains all the information for each of the sensory domains supported by a LF display system 500 in a way that may support an interactive experience. For example, vectorized data for an interactive holographic experience may include any vectorized properties that can provide accurate physics for each of the sensory domains supported by a LF display system 500. Vectorized properties may include any properties that can be synthetically programmed, captured, computationally assessed, etc. A LF processing engine 530 may be configured to translate vectorized properties in vectorized data to rasterized data. The LF processing engine 530 may then project holographic content translated from the vectorized data using the LF display assembly 510. In various configurations, the vectorized properties may include one or more red/green/blue/alpha channel (RGBA) + depth images; multi-view images with or without depth information at varying resolutions, which may include one high-resolution center image and other views at a lower resolution; material properties such as albedo and reflectance; surface normals; other optical effects; surface identification; geometrical object coordinates; virtual camera coordinates; display plane locations; lighting coordinates; tactile stiffness for surfaces; tactile ductility; tactile strength; amplitude and coordinates of sound fields; environmental conditions; somatosensory energy vectors related to the mechanoreceptors for textures or temperature; audio; and any other sensory domain property. Many other vectorized properties are also possible.
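A minimal sketch of how the vectorized properties enumerated above might be grouped per holographic object is shown below. The field names and the trivial translation step are assumptions for illustration, not an API defined by the disclosure.

```python
# Hedged sketch: groups a few of the vectorized properties listed above.
from dataclasses import dataclass, field

@dataclass
class VectorizedObject:
    rgba_depth: list                    # RGBA + depth image samples
    surface_normals: list
    albedo: float = 1.0                 # material reflectance property
    tactile_stiffness: float = 0.0      # haptic-surface property
    sound_field: dict = field(default_factory=dict)  # amplitude/coordinates

    def to_rasterized(self):
        """Stand-in for the LF processing engine's translation step."""
        # A real translation would render 4D light field coordinates; here
        # we simply pair each color/depth sample with its surface normal to
        # show the direction of the data flow.
        return list(zip(self.rgba_depth, self.surface_normals))

balloon = VectorizedObject(rgba_depth=[(255, 0, 0, 255, 1.2)],
                           surface_normals=[(0.0, 0.0, 1.0)],
                           tactile_stiffness=0.3)
print(len(balloon.to_rasterized()))
```

The sparse per-object structure, rather than per-pixel rasterized data, is what keeps the transmitted representation compact.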

The LF display system 500 may also generate an interactive viewing experience. That is, holographic content may be responsive to input stimuli containing information about viewer locations, gestures, interactions with holographic content, or other information derived from the viewer profiling module 528 and/or tracking module 526. For example, in an embodiment, a LF display system 500 creates an interactive viewing experience using vectorized data of real-time video game data generated by a gaming application and received via a network interface 524. In another example, if a holographic object needs to move in a certain direction immediately in response to a viewer interaction, the LF processing engine 530 may update the render of the scene so the holographic object moves in that required direction. This may require the LF processing engine 530 to use a vectorized data set to render light fields in real time based on a 3D graphical scene with the proper object placement and movement, collision detection, occlusion, color, shading, lighting, etc., correctly responding to the viewer interaction. The LF processing engine 530 converts the vectorized data into rasterized data for presentation by the LF display assembly 510. The LF display system 500 may employ various other encoding/decoding techniques that allow the LF display system to present holographic content in approximately real time.

The rasterized data includes holographic content instructions and sensory instructions (display instructions) representing the real-time gaming application data. The LF display assembly 510 simultaneously projects holographic and sensory content of the real-time gaming application data by executing the display instructions. The LF display system 500 monitors viewer interactions (e.g., vocal response, touching, etc.) with the presented real-time gaming application data with the tracking module 526 and viewer profiling module 528. In response to the viewer interactions, the LF processing engine may create an interactive experience by generating additional holographic and/or sensory content for display to the viewers.

To illustrate, consider an example embodiment of an LF display system 500 including a LF processing engine 530 that generates a plurality of holographic objects representing balloons falling from the ceiling of a presentation space during execution of a gaming application. A viewer may move to touch a holographic object representing a balloon. Correspondingly, the tracking system 580 tracks movement of the viewer's hands relative to the holographic object. The movement of the viewer is recorded by the tracking system 580 and sent to the controller 520. The tracking module 526 continuously determines the motion of the viewer's hand and sends the determined motions to the LF processing engine 530. The LF processing engine 530 determines the placement of the viewer's hand in the scene and adjusts the real-time rendering of the graphics to include any required change in the holographic object (such as position, color, or occlusion). The LF processing engine 530 instructs the LF display assembly 510 (and/or sensory feedback system 570) to generate a tactile surface using the volumetric haptic projection system (e.g., using ultrasonic speakers). The generated tactile surface corresponds to at least a portion of the holographic object and occupies substantially the same space as some or all of an exterior surface of the holographic object. The LF processing engine 530 uses the tracking information to dynamically instruct the LF display assembly 510 to move the location of the tactile surface along with the location of the rendered holographic object such that the viewer is given both a visual and tactile perception of touching the balloon. More simply, when a viewer views their hand touching a holographic balloon, the viewer simultaneously feels haptic feedback indicating their hand is touching the holographic balloon, and the balloon changes position or motion in response to the touch.
In some examples, rather than being part of content generated by a gaming application accessed from the data store 522, the interactive balloon may be received as part of holographic content received from a live-streaming application via a network interface 524. In other words, the holographic content displayed by the LF display system 500 may be a holographic content livestream.

The LF processing engine 530 may provide holographic content to display to viewers in a presentation space before, during, and/or after execution of a gaming application to augment the presentation space experience. The holographic content may be provided by the publisher of the gaming application, provided by the presentation space, provided by an advertiser, generated by a LF processing engine 530, etc. The holographic content may be content associated with the gaming application, the genre of the gaming application, the location of the presentation space, advertisements, etc. In any case, the holographic content may be stored in the data store 522, or streamed to the LF display system 500 in vectorized format through the network interface 524. For example, video game content generated by a gaming application may be shown in a presentation space augmented with LF display modules on the walls. The distributor of the gaming application may provide holographic content to present on the wall displays before execution of the gaming application begins. The LF processing engine 530 accesses the holographic content and presents the accessed content from the displays on the walls of the presentation space before execution of the gaming application. In another example, a presentation space with an LF display system 500 is located in San Francisco. The LF display system of the presentation space stores a holographic representation of the Golden Gate Bridge to present before execution of the gaming application if no gaming-application-specific content is provided. Here, as no gaming-application-specific holographic content is provided, the LF processing engine 530 accesses and presents the Golden Gate Bridge in the presentation space. In another example, an advertiser has provided holographic content of its products as advertisements to a presentation space to display after execution of a gaming application.
After the execution of the gaming application concludes, the LF processing engine 530 presents the advertisements to the viewers as they leave the presentation space. In other examples, as described below, a LF processing engine may dynamically generate holographic content to display on the walls of the presentation space.

The LF processing engine 530 may also modify holographic content to suit the presentation space that is presenting the holographic content. For example, not every presentation space is the same size, has the same number of seats, or has the same technical configuration. As such, the LF processing engine 530 may modify holographic content such that it will be appropriately displayed in a presentation space. In an embodiment, the LF processing engine 530 may access a configuration file of a presentation space including the layout, resolution, field of view, and other technical specifications of the presentation space. The LF processing engine 530 may render and present the holographic content based on information included in the configuration file.
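The per-venue configuration step might look like the following. The disclosure mentions a configuration file but not its format; JSON, these key names, and the clamping rule are assumptions.

```python
# Hedged sketch of adapting content to a venue's configuration file.
import json

# Hypothetical per-venue configuration; the keys are invented examples.
venue_cfg = json.loads("""
{"layout": "arena", "resolution": [7680, 4320],
 "field_of_view_deg": 160, "seats": 2000}
""")

def fit_content(content_fov_deg, cfg):
    """Clamp content's field of view to what this venue can display."""
    return min(content_fov_deg, cfg["field_of_view_deg"])

print(fit_content(180, venue_cfg))  # content wider than the venue is clamped
```

The same pattern extends to resolution, seat layout, and any other specification the engine must respect before rendering.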

The LF processing engine 530 may also create holographic content for display by the LF display system 500. Importantly, here, creating holographic content for display is different from accessing, or receiving, holographic content for display. That is, when creating content, the LF processing engine 530 generates entirely new content for display rather than accessing previously generated and/or received content. The LF processing engine 530 can use information from the tracking system 580, the sensory feedback system 570, the viewer profiling module 528, the tracking module 526, or some combination thereof, to create holographic content for display. In some examples, the LF processing engine 530 may access information from elements of the LF display system 500 (e.g., tracking information and/or a viewer profile), create holographic content based on that information, and display the created holographic content using the LF display system 500 in response. The created holographic content may be augmented with other sensory content (e.g., touch, audio, or smell) when displayed by the LF display system 500. Further, the LF display system 500 may store created holographic content such that it may be displayed in the future.

Dynamic Content Generation for a LF Display System

In some embodiments, the LF processing engine 530 incorporates an artificial intelligence (AI) model to create holographic content for display by the LF display system 500. The AI model may include supervised or unsupervised learning algorithms including, but not limited to, regression models, neural networks, classifiers, or any other AI algorithm. The AI model may be used to determine viewer preferences based on viewer information recorded by the LF display system 500 (e.g., by the tracking system 580), which may include information on a viewer's behavior.

The AI model may access information from the data store 522 to create holographic content. For example, the AI model may access viewer information from a viewer profile or profiles in the data store 522 or may receive viewer information from the various components of the LF display system 500. To illustrate, the AI model may determine that a viewer enjoys seeing holographic content in which a performer wears a bow tie. The AI model may determine the preference based on a group of viewers' positive reactions or responses to previously viewed holographic content including a bow-tie-wearing actor. That is, the AI model may create holographic content personalized to a set of viewers according to the learned preferences of those viewers. So, for example, the AI model may create bow ties for actors displayed in the holographic content viewed by a group of viewers using the LF display system 500. The AI model may also store the learned preferences of each viewer in the viewer profile store of the data store 522. In some examples, the AI model may create holographic content for an individual viewer rather than a group of viewers.

One example of an AI model that can be used to identify characteristics of viewers, identify reactions, and/or generate holographic content based on the identified information is a convolutional neural network model with layers of nodes, in which values at nodes of a current layer are a transformation of values at nodes of a previous layer. A transformation in the model is determined through a set of weights and parameters connecting the current layer and the previous layer. For example, an AI model may include five layers of nodes: layers A, B, C, D, and E. The transformation from layer A to layer B is given by a function W1, the transformation from layer B to layer C is given by W2, the transformation from layer C to layer D is given by W3, and the transformation from layer D to layer E is given by W4. In some examples, the transformation can also be determined through a set of weights and parameters used to transform between previous layers in the model. For example, the transformation W4 from layer D to layer E can be based on parameters used to accomplish the transformation W1 from layer A to layer B.

The input to the model can be an image taken by the tracking system 580 encoded onto the convolutional layer A, and the output of the model is holographic content decoded from the output layer E. Alternatively or additionally, the output may be a determined characteristic of a viewer in the image. In this example, the AI model identifies latent information in the image representing viewer characteristics in the identification layer C. The AI model reduces the dimensionality of the convolutional layer A to that of the identification layer C to identify any characteristics, actions, responses, etc. in the image. In some examples, the AI model then increases the dimensionality of the identification layer C to generate holographic content.

The image from the tracking system 580 is encoded to a convolutional layer A. Images input in the convolutional layer A can be related to various characteristics, reaction information, etc. in the identification layer C. Relevance information between these elements can be retrieved by applying a set of transformations between the corresponding layers. For example, a convolutional layer A of an AI model represents an encoded image, and identification layer C of the model represents a smiling viewer. Smiling viewers in a given image may be identified by applying the transformations W1 and W2 to the pixel values of the image in the space of convolutional layer A. The weights and parameters for the transformations may indicate relationships between information contained in the image and the identification of a smiling viewer. For example, the weights and parameters can be a quantization of shapes, colors, sizes, etc. included in information representing a smiling viewer in an image. The weights and parameters may be based on historical data (e.g., previously tracked viewers).

Smiling viewers in the image are identified in the identification layer C. The identification layer C represents identified smiling viewers based on the latent information about smiling viewers in the image.

Identified smiling viewers in an image can be used to generate holographic content. To generate holographic content, the AI model starts at the identification layer C and applies the transformations W3 and W4 to the values of the identified smiling viewers in the identification layer C. The transformations result in a set of nodes in the output layer E. The weights and parameters for the transformations may indicate relationships between identified smiling viewers and specific holographic content and/or preferences. In some cases, the holographic content is directly output from the nodes of the output layer E, while in other cases the content generation system decodes the nodes of the output layer E into holographic content. For example, if the output is a set of identified characteristics, the LF processing engine can use the characteristics to generate holographic content.

Additionally, the AI model can include layers known as intermediate layers. Intermediate layers are those that do not directly correspond to an input image, identified characteristics or reactions, or generated holographic content. For example, in the given example, layer B is an intermediate layer between the convolutional layer A and the identification layer C, and layer D is an intermediate layer between the identification layer C and the output layer E. These intermediate (hidden) layers are latent representations of different aspects of identification that are not observed in the data, but may govern the relationships between the elements of an image when identifying characteristics and generating holographic content. For example, a node in a hidden layer may have strong connections (e.g., large weight values) to input values and identification values that share the commonality of “laughing people smile.” As another example, another node in a hidden layer may have strong connections to input values and identification values that share the commonality of “scared people scream.” Of course, any number of linkages are present in a neural network. Additionally, each intermediate layer may be a combination of functions such as, for example, residual blocks, convolutional layers, pooling operations, skip connections, concatenations, etc. Any number of intermediate layers B can function to reduce the convolutional layer to the identification layer, and any number of intermediate layers D can function to increase the identification layer to the output layer.
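The five-layer model described above (A through E, with transformations W1 through W4) can be sketched numerically. The tiny weight matrices below are arbitrary stand-ins for learned convolutional parameters, chosen only to show the data flow from encoded image to identification layer to output.

```python
# Toy numeric sketch of the five-layer model A..E described above; real
# implementations would use convolutional layers and learned weights.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(weights, v):
    return [sum(w * x for w, x in zip(row, v)) for row in weights]

W1 = [[0.5, -0.2], [0.1, 0.9]]   # A -> B
W2 = [[1.0, 0.0]]                # B -> C (reduce to identification layer)
W3 = [[0.3], [0.7]]              # C -> D (expand toward output)
W4 = [[1.0, 1.0]]                # D -> E

def forward(image_features):
    a = image_features           # convolutional layer A (encoded image)
    b = relu(linear(W1, a))      # intermediate layer B
    c = relu(linear(W2, b))      # identification layer C (e.g., "smiling")
    d = relu(linear(W3, c))      # intermediate layer D
    e = linear(W4, d)            # output layer E (content/characteristic)
    return c, e

c, e = forward([1.0, 2.0])
print(c, e)
```

Identification reads out layer C after applying W1 and W2; generation continues from C through W3 and W4 to the output layer E, mirroring the two uses of the model described above.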

In one embodiment, the AI model includes deterministic methods that have been trained with reinforcement learning (thereby creating a reinforcement learning model). The model is trained to increase the quality of the experience during execution of the gaming application using measurements from the tracking system 580 as inputs, and changes to the created holographic content as outputs.

Reinforcement learning is a machine learning approach in which a machine learns ‘what to do’ (how to map situations to actions) so as to maximize a numerical reward signal. The learner (e.g., the LF processing engine 530) is not told which actions to take (e.g., generating prescribed holographic content), but instead discovers which actions yield the most reward (e.g., increasing the quality of holographic content by making more people cheer) by trying them. In some cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics, trial-and-error search and delayed reward, are the two distinguishing features of reinforcement learning.

Reinforcement learning is defined not by characterizing learning methods, but by characterizing a learning problem. Basically, a reinforcement learning system captures the important aspects of the problem facing a learning agent interacting with its environment to achieve a goal. That is, in the example of generating a song for a performer, the reinforcement learning system captures information about viewers in the presentation space (e.g., age, disposition, etc.). Such an agent senses the state of the environment and takes actions that affect the state to achieve a goal or goals (e.g., creating a pop song for which the viewers will cheer). In its most basic form, the formulation of reinforcement learning includes three aspects for the learner: sensation, action, and goal. Continuing with the song example, the LF processing engine 530 senses the state of the environment with sensors of the tracking system 580, displays holographic content to the viewers in the environment, and achieves a goal that is a measure of the viewers' reception of that song.

One of the challenges that arises in reinforcement learning is the trade-off between exploration and exploitation. To increase the reward in the system, a reinforcement learning agent prefers actions that it has tried in the past and found to be effective in producing reward. However, to discover such actions, the learning agent must select actions that it has not selected before. The agent ‘exploits’ information that it already knows in order to obtain a reward, but it also ‘explores’ information in order to make better action selections in the future. The learning agent tries a variety of actions and progressively favors those that appear to be best while still attempting new actions. On a stochastic task, each action is generally tried many times to gain a reliable estimate of its expected reward. For example, if the LF processing engine creates holographic content that the LF processing engine knows will result in a viewer laughing after a long period of time, the LF processing engine may change the holographic content such that the time until a viewer laughs decreases.
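The exploration/exploitation trade-off described above is often illustrated with an epsilon-greedy rule, sketched below. The action names and reward values are invented for illustration; the disclosure does not prescribe a particular selection strategy.

```python
# Hedged sketch of epsilon-greedy action selection: mostly exploit the
# best-known action, occasionally explore a new one.
import random

# Hypothetical audience-reaction rewards for each candidate content action.
true_reward = {"confetti": 0.2, "fireworks": 0.8, "balloons": 0.5}
estimate = {a: 0.0 for a in true_reward}
counts = {a: 0 for a in true_reward}

def choose(epsilon=0.1):
    """Exploit the best-known action, exploring with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(list(estimate))
    return max(estimate, key=estimate.get)

def update(action, reward):
    counts[action] += 1
    # incremental average of the rewards observed for this action
    estimate[action] += (reward - estimate[action]) / counts[action]

for action in true_reward:        # try each action once (pure exploration)
    update(action, true_reward[action])

for _ in range(50):               # thereafter, epsilon-greedy selection
    a = choose()
    update(a, true_reward[a])

print(max(estimate, key=estimate.get))  # "fireworks"
```

Because the agent seeds an estimate for every action before exploiting, it reliably converges on the highest-reward action, mirroring the trial-and-error search described above.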

Further, reinforcement learning considers the whole problem of a goal-directed agent interacting with an uncertain environment. Reinforcement learning agents have explicit goals, can sense aspects of their environments, and can choose actions to receive high rewards (e.g., a roaring crowd). Moreover, agents generally operate despite significant uncertainty about the environment they face. When reinforcement learning involves planning, the system addresses the interplay between planning and real-time action selection, as well as the question of how environmental elements are acquired and improved. For reinforcement learning to make progress, important subproblems have to be isolated and studied, with those subproblems playing clear roles in complete, interactive, goal-seeking agents.

The reinforcement learning problem is a framing of a machine learning problem in which interactions are processed and actions are carried out to achieve a goal. The learner and decision-maker is called the agent (e.g., the LF processing engine 530). The thing it interacts with, comprising everything outside the agent, is called the environment (e.g., viewers in a presentation space, etc.). These two interact continually, the agent selecting actions (e.g., creating holographic content) and the environment responding to those actions and presenting new situations to the agent. The environment also gives rise to rewards, special numerical values that the agent tries to maximize over time. In one context, the rewards act to maximize viewers' positive reactions to holographic content. A complete specification of an environment defines a task, which is one instance of the reinforcement learning problem.

To provide more context, an agent (e.g., the LF processing engine 530) and the environment interact at each of a sequence of discrete time steps, i.e., t=0, 1, 2, 3, etc. At each time step t, the agent receives some representation of the environment's state st (e.g., measurements from the tracking system 580). The states st are within S, where S is the set of possible states. Based on the state st and the time step t, the agent selects an action at (e.g., making the performer do the splits). The action at is within A(st), where A(st) is the set of actions available in state st. One time step later, in part as a consequence of its action, the agent receives a numerical reward rt+1. The rewards rt+1 are within R, where R is the set of possible rewards. Once the agent receives the reward, the agent finds itself in a new state st+1.

At each time step, the agent implements a mapping from states to probabilities of selecting each possible action. This mapping is called the agent's policy and is denoted πt where πt(s,a) is the probability that at=a if st=s. Reinforcement learning methods can dictate how the agent changes its policy as a result of the states and rewards resulting from agent actions. The agent's goal is to maximize the total amount of reward it receives over time.
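The agent-environment loop formalized above can be sketched as follows. The two-state "audience mood" environment, the action names, and the deterministic policy are purely illustrative; a learned policy would instead be updated from the observed rewards.

```python
# Minimal sketch of the agent-environment loop: at each step t the agent
# sees state s_t, picks action a_t from its policy, and receives r_{t+1}.

STATES = ["bored", "excited"]
ACTIONS = ["calm_scene", "fireworks"]

def environment(state, action):
    """Return (next_state, reward): fireworks excite a bored audience."""
    if action == "fireworks":
        return "excited", 1.0
    return "bored", 0.0

# pi_t(s, a) reduced to a deterministic mapping from state to action.
policy = {"bored": "fireworks", "excited": "calm_scene"}

state, total_reward = "bored", 0.0
for t in range(4):
    action = policy[state]                       # a_t chosen by the policy
    state, reward = environment(state, action)   # s_{t+1}, r_{t+1}
    total_reward += reward

print(state, total_reward)
```

The three signals named below (actions, states, rewards) are exactly what passes between `policy` and `environment` on each iteration.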

This reinforcement learning framework is flexible and can be applied to many different problems in many different ways (e.g. generating holographic content). The framework proposes that whatever the details of the sensory, memory, and control apparatus, any problem (or objective) of learning goal-directed behavior can be reduced to three signals passing back and forth between an agent and its environment: one signal to represent the choices made by the agent (the actions), one signal to represent the basis on which the choices are made (the states), and one signal to define the agent's goal (the rewards).

Of course, the AI model can include any number of machine learning algorithms. Some other AI models that can be employed are linear and/or logistic regression, classification and regression trees, k-means clustering, vector quantization, generative adversarial networks, etc. Whatever the case, generally, the LF processing engine 530 takes an input from the tracking module 526 and/or the viewer profiling module 528, and a machine learning model creates holographic content in response. Similarly, the AI model may direct the rendering of holographic content.

In an example, the LF processing engine 530 augments a virtual character for a gaming application. The LF processing engine 530 augments the virtual character using information included in the viewer profiles stored in the data store 522. For example, information included in stored viewer profiles indicates that a large number of viewers are drawn to warrior priestesses with mohawks. As such, the LF processing engine 530 augments a virtual character so that it is displayed by the LF display system 500 as a female barbarian shaman with an orange mohawk. More explicitly, the LF processing engine 530 accesses the viewer profiles of the viewers in a presentation space. The LF processing engine 530 parameterizes (e.g., quantifies) information in each viewer profile. For example, the LF processing engine 530 can quantify characteristics such as the age, location, sex, preferences, etc. of a viewer. Further, the LF processing engine 530 can parameterize other information included in a viewer profile. For example, a viewer profile may indicate that a viewer has executed gaming applications including four strong female leads, prefers playing mage classes, and generally prefers crazy haircuts for their characters. The viewer profile may quantify this tendency (e.g., generate a score indicating the viewer's interest in strong female characters). The LF processing engine 530 inputs the parameterized viewer profiles into an AI model (e.g., a neural network) configured to generate characteristics of a virtual character based on input parameters and receives characteristics for the virtual character in response. The LF processing engine 530 then inputs the characteristics into an AI model (e.g., a procedural generation algorithm) configured to generate a character given a set of physical characteristics, and generates the virtual female character.
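The parameterization step described above might look like the following. The feature names, scaling constants, and profile keys are invented for illustration; the disclosure does not specify how profiles are quantified.

```python
# Hedged sketch: turning viewer profiles into numeric feature vectors
# suitable for input to an AI model (the model itself is not shown).

def parameterize(profile):
    """Map a viewer profile to a feature vector; keys are hypothetical."""
    return [
        profile.get("age", 0) / 100.0,                      # normalized age
        1.0 if "mage" in profile.get("favorite_classes", []) else 0.0,
        profile.get("strong_female_leads_played", 0) / 10.0,  # preference score
    ]

profiles = [
    {"age": 27, "favorite_classes": ["mage"], "strong_female_leads_played": 4},
    {"age": 31, "favorite_classes": ["warrior"], "strong_female_leads_played": 2},
]

features = [parameterize(p) for p in profiles]
print(features[0])
```

An AI model configured to generate character traits would then consume these vectors in place of the raw profiles.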
While this disclosure frequently uses characters as the example of the entity that users control in gaming applications, in some games (such as puzzle games or indie games) the controlled entities can be non-sentient objects. For example, a puzzle game might have blocks that are moved by a 3D cursor in a holographic video game.

The LF processing engine 530 can create holographic content based on a gaming application being shown in the presentation space. For example, a gaming application being shown in the presentation space may be associated with a set of metadata describing the gaming application's characteristics. The metadata may include, for example, the setting, genre, gaming application type, themes, titles, play times, etc. The LF processing engine 530 may access any of the metadata describing the gaming application and generate holographic content to present in the presentation space in response. For example, a gaming application titled “The Excellent Bowl” is a game about to be played in a presentation space augmented with a LF display system 500. The LF processing engine 530 accesses the metadata of the gaming application to create holographic content for the walls of the presentation space before the gaming application begins. Here, the metadata indicates that a vibrant stadium setting is appropriate for The Excellent Bowl. The LF processing engine 530 inputs the metadata into an AI model and receives holographic content to display on the walls of the presentation space in response. In this example, the LF processing engine 530 creates a beachside sunset to display on the walls of the presentation space before the gaming application begins to play.
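A minimal sketch of metadata-driven content selection follows. The metadata keys and the content catalog are illustrative assumptions; a real system would use an AI model rather than a lookup table.

```python
# Hypothetical catalog mapping game metadata to pre-game wall content.
CONTENT_CATALOG = {
    ("stadium", "vibrant"): "beachside_sunset_scene",
    ("dungeon", "dark"): "torchlit_cavern_scene",
}

def select_pregame_content(metadata: dict) -> str:
    """Pick wall content for the presentation space from game metadata."""
    key = (metadata.get("setting"), metadata.get("mood"))
    # Fall back to a neutral scene when no catalog entry matches.
    return CONTENT_CATALOG.get(key, "neutral_lobby_scene")

meta = {"title": "The Excellent Bowl", "genre": "sports",
        "setting": "stadium", "mood": "vibrant"}
scene = select_pregame_content(meta)  # beachside sunset for the stadium game
```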

In an example, the LF processing engine 530 creates holographic content based on the viewers present at a presentation space including a LF display system 500. For example, a group of viewers enters a presentation space to view an eSports event of a gaming application that will be augmented by holographic content displayed by the LF display system 500. The viewer profiling module 528 generates a viewer profile for each of the viewers in the presentation space and an aggregate viewer profile representing all of the viewers in the presentation space. The LF processing engine 530 accesses the aggregate viewer profile and creates holographic content to display to the viewers in the presentation space. For example, the viewers in a presentation space are a group of eSports fans viewing a team working towards a championship, and, therefore, the aggregate viewer profile includes information indicating that they may enjoy holographic content commensurate with super-fans of a sports team (determined, e.g., through parameterization and input into an AI model). As such, the LF processing engine 530 generates holographic content that creates a rowdier atmosphere in the presentation space (e.g., foam fingers, chants, noise makers, etc.).

In an example, the LF processing engine 530 creates holographic content based on the responses of viewers viewing execution of the gaming application. For example, viewers in a presentation space are viewing video game data generated by a gaming application and presented as holographic content in a presentation space augmented by a LF display system 500. The tracking module 526 and the viewer profiling module 528 monitor the reactions of the viewers viewing the video game data. For example, the tracking module 526 may obtain images of viewers as they view the video game data. The tracking module 526 identifies the viewer, and the viewer profiling module 528 may use machine vision algorithms to determine a reaction of the viewer based on information included in the image. For example, an AI model can be used to identify whether a viewer viewing the video game data is smiling and, accordingly, the viewer profiling module 528 can indicate in the viewer profile whether the viewer has a positive or negative response to the video game data based on the smile. Other reactions may also be determined. The tracking module may determine information about viewers including the position of the viewer, a movement of the viewer, a gesture of the viewer, an expression of the viewer, an age of the viewer, a sex of the viewer, an ethnicity of the viewer, or clothing worn by the viewer. This information may be shared with the viewer profiling module 528 to generate a viewer profile.
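The reaction-to-profile flow can be sketched as follows. The `detect_expression` function here is a placeholder standing in for the machine-vision AI model; its label set and the profile fields are assumptions for illustration.

```python
# Hedged sketch of recording a viewer reaction in a viewer profile.

def detect_expression(image: dict) -> str:
    # Placeholder for AI model inference; a real system would run a trained
    # classifier on the tracked viewer's face region of the captured image.
    return image.get("label", "neutral")

def update_reaction(profile: dict, image: dict) -> dict:
    """Map a detected expression to a positive/negative response flag."""
    expression = detect_expression(image)
    if expression == "smile":
        profile["last_response"] = "positive"
    elif expression == "frown":
        profile["last_response"] = "negative"
    else:
        profile["last_response"] = "neutral"
    return profile

profile = update_reaction({"viewer_id": 7}, {"label": "smile"})
# profile["last_response"] is now "positive"
```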

The LF processing engine 530 may create holographic content based on previously existing or provided advertisement content. For example, the LF processing engine 530 can request an advertisement from a network system via the network interface 524, the network system provides the holographic content in response, and the LF processing engine 530 creates holographic content for display including the advertisement. Examples of advertisements include products, text, videos, etc. Advertisements may be presented to specific viewing volumes based on the viewers in that viewing volume. Similarly, holographic content may augment video game data with an advertisement (e.g., a product placement). Most generally, the LF processing engine 530 can create advertisement content based on any of the characteristics and/or reactions of the viewers in the presentation space as previously described.

The preceding examples of creating content are not limiting. Most broadly, the LF processing engine 530 creates holographic content for display to viewers of a LF display system 500. The holographic content can be created based on any of the information included in the LF display system 500.

Gaming Network

FIG. 5B illustrates an example LF gaming network 550, in accordance with one or more embodiments, comprised of one or more gaming systems (e.g., gaming systems 560A, 560B, 560C, etc.). Each gaming system may include an LF display system (e.g., LF display system 500A, 500B, etc.) and a gaming application, or be a standalone gaming system with a gaming application (e.g., gaming system 560C). The LF display system may be configured similarly to the LF display systems described herein (e.g., LF display systems 100, 400, 500). The gaming application is a piece of software configured to execute a video game, and it may produce holographic video game data. In some embodiments, the gaming application may produce both holographic video game data and 2D data, or there may be separate versions of the gaming application, only one of which produces holographic video game data.

The LF display system may use the holographic video game data to generate holographic video game content for presenting the video game data. The LF gaming network 550 also includes a gaming server 554, a network system 556, and a network 552. The gaming server, as described in more detail below, may be employed to administer the game (i.e., manage and/or coordinate the gaming application 590). The network system may be configured to manage the digital rights and distribution of the gaming application, video game data, and holographic video game content. The network 552 is a network that allows information to travel between entities in the LF gaming network 550. In other embodiments, the LF gaming network 550 comprises additional or fewer entities than those described herein. Similarly, the functions can be distributed among the different entities in a different manner than is described here.

More broadly, the gaming server 554 generates video game data. In some embodiments this video game data may be holographic video game content. In other embodiments, the gaming server 554 administers the video game for one or more gaming application instances. The gaming server may enforce the rules, keep track of the game state, and/or synchronize the game state between multiple gaming applications. In these embodiments, each gaming application 590 may generate holographic video game content for presentation by the corresponding LF display system 500A or 500B. The holographic video game content may comprise the holographic video content, along with any recorded sensory data or synthetic data of an event that may be projected by a LF display system when showing a video game. For example, the sensory data may include recorded audio, recorded images, recorded interactions with objects, etc. Many other types of sensory data may be used. To illustrate, the recorded visual content may include: 3D graphics scenes, 3D models, object placement, textures, color, shading, and lighting; 2D video game data which can be converted to a holographic form using an AI model and a large data set of similar video game conversions; multi-view camera data from a camera rig with many cameras, with or without a depth channel; plenoptic camera data; CG content; or other types of recorded sensory data of an event as described herein.

In some configurations, the holographic video game content may be encoded via a proprietary encoder that performs an encoding operation reducing the holographic video game content for a video game into a vectorized data format as described above. In some instances, the gaming server 554 generates and encodes the holographic video game content to a vectorized format, as described above, before it is sent over the network 552. Encoding data to vectorized data may include image processing, audio processing, or any other computations that may result in a reduced data set that is easier to transmit over the network 552. The encoder may support formats used by video game-making industry professionals. In other configurations, the gaming server 554 may transmit holographic video game content to the network system 556 and/or the LF display system without encoding the content. In still other configurations, the gaming server 554 transmits normal video game data to the gaming application 590, which generates the holographic video game data, which may then be encoded to a vectorized format for the LF display system 500A or 500B.
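The exact vectorized format is proprietary, so as a stand-in the sketch below uses run-length encoding to show the principle the paragraph describes: encoding reduces a raster payload into a smaller data set that round-trips losslessly over the network.

```python
# Toy encoder/decoder pair illustrating data reduction before transmission.
# Run-length encoding is an assumption standing in for the proprietary codec.

def encode(samples: list) -> list:
    """Collapse runs of identical samples into (value, count) pairs."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs

def decode(runs: list) -> list:
    """Expand (value, count) pairs back into the raster sample stream."""
    return [value for value, count in runs for _ in range(count)]

raster = [0, 0, 0, 255, 255, 0, 0, 0, 0]
encoded = encode(raster)
assert decode(encoded) == raster  # lossless round trip, fewer items to send
```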

Each gaming system (e.g., 560A, 560B, 560C) may receive video game data from the network 552 via a network interface (e.g., network interface 524). In some embodiments, as described above, the video game data may be holographic video game data which is encoded, possibly in a vectorized format, and the LF processing engine 530 of each LF display system 500 includes a decoder to decode the encoded holographic video game content. In other embodiments, normal video game data is received by the gaming application 590, and the gaming application generates holographic video game data, which may or may not be encoded before being sent to the LF processing engine 530. Upon receiving encoded data, a LF processing engine 530 generates rasterized data for the LF display assembly 510 by applying decoding algorithms provided by the decoder to the received encoded holographic game content. In some examples, the LF processing engine may additionally generate rasterized data for the LF display system using input from a tracking module, a viewer profiling module, and the sensory feedback system of the LF display system as described herein. In some embodiments, rasterized data generated for the LF display assembly 510 reproduces the game content generated by the gaming application 590, which is in turn administered by the gaming server 554. Importantly, each LF display system 500A and 500B generates rasterized data suitable for the particular configuration of the LF display assembly in terms of geometry, resolution, etc. where the gaming system is located. In some configurations, the encoding and decoding process is part of a proprietary encoding/decoding system pair (or ‘codec’) which may be offered to display customers or licensed by third parties. In some instances, the encoding/decoding system pair may be implemented as a proprietary API that may offer content creators a common programming interface.
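The idea of rasterizing display-agnostic content for a particular hardware configuration can be illustrated with a minimal sketch. The continuous "scene function" and the configuration dictionary are assumptions; a real LF processing engine 530 is far more involved.

```python
# Illustrative sketch: the same display-agnostic scene is sampled at the
# resolution of whichever LF display assembly will present it.

def rasterize(scene_fn, config: dict) -> list:
    """Sample a continuous scene function at one display's resolution."""
    width, height = config["resolution"]
    return [
        [scene_fn(x / width, y / height) for x in range(width)]
        for y in range(height)
    ]

# A display-agnostic "scene": intensity as a function of normalized (u, v).
scene = lambda u, v: u * v

low_res = rasterize(scene, {"resolution": (4, 2)})    # one hardware config
high_res = rasterize(scene, {"resolution": (8, 4)})   # another hardware config
# Same content, two different rasterizations.
```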

In some configurations, the various LF gaming systems 560A, 560B in the LF gaming network 550 may have different hardware configurations corresponding to the different LF display assemblies 510 of the corresponding LF display systems 500A, 500B, respectively. Hardware configurations can include the arrangement of physical systems, energy sources, energy sensors, haptic interfaces, sensory capabilities, resolutions, fields of view, LF display module configurations, or any other hardware description of a system in the LF gaming network 550. Each hardware configuration may generate, or utilize, sensory data in different data formats. As such, a decoder system may be configured to decode encoded game content for the LF display system on which it will be presented. For example, a LF gaming system 560A executes the gaming application 590, which generates the holographic game content as an input for the corresponding LF display system 500A. The LF display system 500A has a first hardware configuration, and the decoding system accesses information describing this first hardware configuration of the gaming system 560A. The decoding system decodes the encoded game content using the accessed hardware configuration such that the decoded game content can be processed by the LF processing engine 530 of the receiving LF display system 500A. The LF processing engine 530 generates and presents rasterized game content for the first hardware configuration despite the content being generated in a format which is agnostic to the resolution, field of view, projected rays per degree, etc., of any particular hardware configuration. In a similar manner, similar holographic game content generated by the gaming application 590 of the LF gaming system 560B can be presented by the second hardware configuration. In general, holographic game content may be generated for any gaming system (e.g., gaming system 560A, 560B) whatever the hardware configuration of the LF display assembly of the corresponding LF display system 500A, 500B.
Other aspects that may be included in the hardware configuration are: a resolution, a number of projected rays per degree, a field of view, a deflection angle on the display surface, a dimensionality of the display surface, etc. The hardware configuration may also include a number of display panels of the LF display assembly, a relative orientation of the display panels, a height of the display panels, a width of the display panels, and a layout of the display panels.

Similarly, various gaming systems in the LF gaming network 550 may have different geometric orientations. Geometric orientations reflect the physical size, layout, and arrangement of the various modules and systems included in the LF display system of the gaming system. As such, a decoder system may be configured to decode encoded game content for the LF display system in the geometric configuration on which it will be presented. For example, a gaming system (e.g., gaming system 560A) including an LF display system (e.g., LF display system 500A) having a first geometric configuration receives encoded game content from a gaming server (e.g., gaming server 554) generating game content agnostic of any particular geometric configuration. The decoding system accesses information describing the first geometric configuration of the LF gaming system 560A. The decoding system decodes the encoded game content using the accessed geometric configuration such that the decoded game content can be processed by the LF processing engine 530 of the receiving LF display system 500A. The LF processing engine 530 generates and presents content for the first geometric configuration despite the content being recorded in a format which is agnostic of any particular geometric configuration. In a similar manner, holographic content generated by the gaming server 554 can be presented by any LF gaming system (e.g., LF gaming system 560B) whatever its geometric configuration. Other aspects that may be included in the geometric configuration are a number of display panels (or surfaces) of the LF display assembly and a relative orientation of the display panels.

Similarly, various gaming display areas in the LF gaming network 550 may have different configurations. These configurations reflect any of the number and/or position of holographic object volumes, the number and/or position of viewing volumes, and the number and/or position of viewing locations relative to a LF display system. As such, a decoder system may be configured to decode encoded game content for the gaming display area in which it will be presented. For example, a gaming system (e.g., gaming system 560A) having a LF display system (e.g., LF display system 500A) with a LF display assembly 510 having a first gaming display area receives encoded game content from a gaming application 590 generated for a generic gaming display area (or some other space). The decoding system accesses information describing the gaming display area particular to the LF display system 500A. The decoding system decodes the encoded video game content for the appropriate gaming display area such that the decoded data can be processed by the LF processing engine 530 installed in the gaming display area. The LF processing engine 530 generates and presents content for the gaming display area despite the content being generated for presentation in a general gaming display area.

The network system 556 is any system configured to manage the transmission of game content between systems in a LF gaming network 550. For example, the network system 556 may receive a request for game content from a gaming system 560A and facilitate transmission of the game content to the display system 500A from the gaming server 554. The network system 556 may also store game content, viewer profiles, additional holographic or sensory content, etc. for transmission to, and/or storage by, other gaming systems 560 in the LF gaming network 550. The network system 556 may also include a LF processing engine 530 that can generate holographic content as previously described.

The network system 556 may include a digital rights management (DRM) module to manage the digital rights of the game content. As previously discussed, in some instances the gaming server only exchanges normal non-holographic game state data with one or more gaming applications 590. In other instances, the gaming server 554 may transmit holographic content to the network system 556, and the DRM module may encrypt the holographic content using a digital encryption format. In other examples, the gaming server 554 encodes recorded light field data into a holographic content format that can be managed by the DRM module. The network system 556 may provide the digital encryption key to a LF display system of a LF gaming system 560A or 560B such that each corresponding LF display system 500A or 500B, respectively, can decrypt and subsequently display the holographic content to viewers. Most generally, the network system 556 and/or the gaming server 554 encodes the video game data, and a LF display system may decode the video game data.
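The encrypt-then-provide-key flow can be sketched as below. The XOR keystream cipher is a toy stand-in for a real digital encryption format and is NOT secure; it only illustrates that the key the network system 556 provides is what lets the LF display system decrypt DRM-protected content.

```python
import hashlib

# Toy symmetric cipher (NOT secure; illustration only): a SHA-256-derived
# keystream XORed with the content. Applying it twice with the same key
# recovers the original bytes.

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric: encrypts and decrypts with the same key."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"key-provided-by-network-system-556"   # hypothetical key material
content = b"holographic game content"
encrypted = xor_cipher(content, key)           # DRM module encrypts
assert xor_cipher(encrypted, key) == content   # LF display system decrypts
```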

The network system 556 may act as a repository for previously recorded and/or created holographic content (e.g., game content). Each piece of holographic content may be associated with a transaction fee that, when received, causes the network system 556 to transmit the holographic content to the LF display system 500 that provides the transaction fee. For example, a LF display system 500A may request access to the holographic content via the network 552. The request includes a transaction fee for the holographic content. In response, the network system 556 transmits the holographic content to the LF display system for display to viewers. In other examples, the network system 556 can also function as a subscription service for holographic content stored in the network system. In another example, the gaming server 554 processes game data in real time and generates holographic content representing a video game. A gaming system 560A transmits a request for the game content to the gaming server 554. The request includes a transaction fee for the game content. In response, the gaming server 554 transmits the game content for display on the gaming system. As an example, a user of a gaming system 560A may transmit a payment to the network system 556 to access a particular outfit for their character in a gaming application 590. More generally, the network system 556 may act as a mediator in exchanging transaction fees and/or managing holographic content data flow across the network 552. Additionally, in some cases, the network system is capable of modifying holographic content such that it is presentable by the LF display system receiving the holographic content.

In some examples, the network system 556 may act as a platform for gaming applications (e.g., STEAM). In this case, the network system may store save files, profiles, digital rights, payment information, gaming system configuration information, etc. Further, the network system may store different gaming applications such that gaming systems in the gaming network 550 may access and download additional gaming applications. The network system 556 may also store any of the aforementioned information associated with particular gaming applications. For example, the network system may store the digital rights for a first gaming application and a second gaming application installed on a particular gaming system in the network. In another example, the network system may store save files for multiple gaming applications such that a user can access the save files from different gaming systems within the LF gaming network 550.

The network 552 represents the communication pathways between systems in a LF gaming network 550. In one embodiment, the network is the Internet, but it can also be any network, including but not limited to a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a mobile, wired, or wireless network, a cloud computing network, a private network, or a virtual private network, and any combination thereof. In addition, all or some of the links can be encrypted using conventional encryption technologies such as the secure sockets layer (SSL), Secure HTTP, and/or virtual private networks (VPNs). In another embodiment, the entities can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above.

Gaming Applications in a Gaming Network

As described above, gaming systems can include various LF gaming systems 560A, 560B, each with a corresponding LF display system 500A, 500B, respectively, and the gaming systems can be networked together to form a LF gaming network 550. Within the gaming network 550, the gaming systems execute gaming applications. The gaming applications may execute on standalone hardware or be incorporated into the LF display system. In either case, the gaming applications are software programs that allow a user to participate in a video game. Video games span a wide variety of categories including, for example, simulations, strategy games, battle royales, etc. The video games generate content that may provide entertainment to a user executing the gaming application.

The video game content generated by a gaming application (e.g., gaming application 590) may be displayed as holographic content on a LF display system (e.g., LF display system 500A). The gaming application 590 provides one of several data formats representing the video game content to a controller (e.g., controller 520), and a LF processing engine (e.g., LF processing engine 530) generates holographic content from the video game content.

In an embodiment, the data input into the controller 520 (“input data”) originates from the gaming application 590. For example, the input data may represent a computer-generated (“CG”) scene from within the gaming application. The scene may include, for example, geometry, textures, lighting, and material properties. The scene may be rendered according to several methods: with multiple views corresponding to multiple perspectives, with or without a depth channel; as a 4D or 5D (4D + depth) light field; as a deep image (i.e., a format that allows for multiple RGBAZ (red, green, blue, alpha, and Z) samples per X-Y energy source location coordinate); as a center view with a larger resolution and multiple side views corresponding to multiple perspectives at lower resolution; as depth maps to attain a lower bandwidth for data; or as multiple views with multiple side views corresponding to different perspectives with only incremental RGB channel dataset differences recorded. Rendering may combine any number of these methods or may include additional methods not disclosed here. More generally, the input data may be rendered in a format which retains the data of the light field and the data describing the light intensity as a function of viewing angle on every projected holographic surface.
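The deep-image format mentioned above can be sketched as a small data structure: each X-Y energy source location holds a list of RGBAZ samples rather than a single pixel, so multiple surfaces at different depths survive the render. The class layout is an assumption for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class RGBAZSample:
    r: float  # red
    g: float  # green
    b: float  # blue
    a: float  # alpha (coverage/opacity)
    z: float  # depth of this sample

@dataclass
class DeepImage:
    width: int
    height: int
    # (x, y) -> list of RGBAZ samples; multiple samples per coordinate.
    samples: dict = field(default_factory=dict)

    def add_sample(self, x: int, y: int, sample: RGBAZSample) -> None:
        self.samples.setdefault((x, y), []).append(sample)

img = DeepImage(4, 4)
# Two surfaces visible at the same pixel, at different depths:
img.add_sample(1, 2, RGBAZSample(1.0, 0.0, 0.0, 0.5, 2.0))  # near, translucent
img.add_sample(1, 2, RGBAZSample(0.0, 0.0, 1.0, 1.0, 5.0))  # far, opaque
```

A flat RGBA image would have collapsed these into one pixel; the deep image keeps both samples so downstream compositing can use the per-sample depth.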

In an embodiment, the gaming application 590 provides metadata about the hardware configuration (such as resolution, rays per degree, geometry, etc.) of the display assembly 510 to the gaming server 554, which the gaming server 554 uses to generate rasterized LF data that is then provided to the LF display system over the network 552, possibly for further manipulation to take into account display calibration (e.g., required warping, local color calibration, etc.) before being displayed on the LF display assembly 510. In other embodiments, the gaming server 554 supports a number of common hardware configurations and delivers more than one stream of LF data to gaming applications 590.

In some examples, gaming applications or gaming systems may be provided with tools that enable the generation of holographic content from input data. For example, a gaming application may be provided with a toolkit, a library, a codec, an application programming interface (API), etc. (“tools”) that enables rendered input data to be converted to holographic output data and presented to the LF processing engine 530 of the LF controller 520 within the LF display system 500. Further, the tools may allow the gaming applications and/or gaming systems to convert input data for specific LF display systems. For example, given multiple models of LF display systems, the gaming applications may employ a tool that allows the input data to be usable on all of these models of LF display systems.

Holographic content for different energy domains (e.g., audio, tactile, etc.) may also be generated from video game data input to the LF processing engine 530. For example, the input data may include audio data from the game (e.g., voices or gunshots). The LF processing engine 530 may generate holographic content with the audio data encoded. In another example, the input data may include tactile information for surface textures or propagating pressure waves (e.g., from explosions or hand-to-hand combat). In this case, the LF processing engine 530 may generate holographic content enabling a LF display system with ultrasonic transducers as part of the sensory feedback system 570 to generate ultrasonic energy waves and/or tactile surfaces representing the surfaces within the holographic video game data. In some cases, the LF processing engine 530 may interpret input data that has no sensory data and generate sensory content for that data. For example, the input data may only include a visual representation of an explosion, and the LF processing engine 530 may generate corresponding holographic content including tactile feedback for the explosion.
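A sketch of deriving a tactile cue from visual-only input data follows. The event fields and the inverse-square intensity model are illustrative assumptions, not the engine's actual method.

```python
# Hedged sketch: synthesize an ultrasonic pressure-wave cue for events whose
# input data carries no tactile information of its own.

def derive_tactile(event: dict):
    if event.get("tactile") is not None:
        return event["tactile"]  # input data already includes tactile info
    if event.get("type") == "explosion":
        # Attenuate intensity with distance from the viewer (inverse square).
        distance = max(event.get("distance_m", 1.0), 0.1)
        return {"kind": "pressure_wave",
                "intensity": min(1.0, 1.0 / distance ** 2)}
    return None  # no tactile cue synthesized for this event

cue = derive_tactile({"type": "explosion", "distance_m": 2.0})
# cue == {'kind': 'pressure_wave', 'intensity': 0.25}
```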

A gaming application may be implemented in a variety of manners within a LF gaming network. For example, a gaming application may be implemented as a client-server gaming application. In these examples, there is an authoritative single-player or multiplayer server (e.g., gaming server 554) that is responsible for administering a game state. One or more player clients (e.g., a gaming application 590) executing on one or more gaming systems (e.g., LF gaming system 560A) can connect to this gaming server and participate in a game, thereby impacting the game state. Herein, the gaming server 554 may administer the game experience of one or more gaming applications using a variety of methods, including, for example, collecting inputs from gaming systems, collecting timestamps from clients, synchronizing a game state between connected gaming systems, enforcing global rules of the gaming application, verifying and enforcing gaming application actions, sending game state snapshots to connected gaming systems, etc. Other examples are also possible.

A client-server architecture enables a gaming application 590 to forward user inputs to the gaming server 554 for execution. For example, a user executes a gaming application from a local computational device (e.g., LF gaming system 560A), and the gaming server 554 resides on a computational device remote from the player. In an embodiment, the gaming system 560A may execute a version of the gaming application (e.g., gaming application 590), and the server may run an identical version of the gaming application. In this embodiment, no holographic video game data is transferred over the network 552. In another embodiment, the gaming server 554 may execute a version of the gaming application configured to manage multiplayer connectivity. The gaming server 554 receives input data sampled from the gaming application 590 executing on the gaming system 560A and generates holographic video game content using the inputs. In at least one embodiment, this holographic video game data is encoded (e.g., to a vectorized format) and transferred over the network 552 to the LF gaming systems 560A, 560B.

In a client-server architecture, a game state for the gaming application is maintained by the gaming server 554. The game state includes all data and logic necessary to execute the game. Examples of game state information include current player resources, positions, and all the objects that make up a snapshot of the game at a single point in time. Generally, gaming systems accessing the gaming server 554 are not allowed to define the main game state, but rather receive it from the server. However, to offer a player the illusion of immediate responsiveness, a gaming system may predict the flow of the game and make soft changes to the game state that are shown to the player. In this case, the gaming server 554 may adjust or even erase some of these changes via reconciliation with the server's game state updates upon their arrival from the server. When the updated server game state arrives, the predicted state of the gaming application at the gaming system is recomputed from the updated game state as well as the player inputs that the gaming system has sent to the server but that have not yet been acknowledged. In responding to a gaming system, the gaming server 554 may also calculate the game state by taking into account network delays using a technique such as lag compensation (e.g., rewinding time by an appropriate amount in order to determine what a shooting client saw at the time the shot was fired, to determine whether the target was hit at that earlier time). The data that a gaming system receives from a gaming server 554 running game logic is sometimes known as game data.
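The prediction/reconciliation loop described above can be sketched minimally: the client predicts immediately, and on each authoritative update it adopts the server state and replays only the inputs the server has not yet acknowledged. Here game state is a single 1-D position; real game state is far richer.

```python
# Sketch of client-side prediction with server reconciliation.

class PredictingClient:
    def __init__(self):
        self.position = 0   # predicted (displayed) state
        self.pending = []   # (seq, move) inputs not yet acknowledged
        self.seq = 0

    def apply_input(self, move: int) -> None:
        """Predict immediately so the player sees an instant response."""
        self.seq += 1
        self.pending.append((self.seq, move))
        self.position += move

    def on_server_state(self, server_position: int, last_acked_seq: int) -> None:
        """Reconcile: adopt the authoritative state, then replay inputs the
        server has not yet processed."""
        self.pending = [(s, m) for s, m in self.pending if s > last_acked_seq]
        self.position = server_position
        for _, move in self.pending:
            self.position += move

client = PredictingClient()
client.apply_input(+1)
client.apply_input(+1)
client.apply_input(-1)
# Server processed only input 1 and corrected it (e.g., a wall blocked it):
client.on_server_state(server_position=0, last_acked_seq=1)
# Predicted state is recomputed: 0 (server) + 1 - 1 = 0
```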

In some embodiments, the LF gaming network shown in FIG. 5B may have an alternate configuration. For example, in one embodiment, a gaming application 590 is both a server and a client. A gaming application 590 could be single player and not act as a server until a time (possibly mid-gameplay) at which the application starts allowing other connections over a network. The application is then both the authority of the multiplayer game and also the single-player gaming client and interface. Sometimes this may be implemented by spawning a server as a child process of the game. Minecraft is an example of such a game, designed in such a way that a child process of the application is a server which only listens to localhost unless the user of the application clicks a UI button (while the game is running and the character is in the world driven by the child server process), which then opens the port to the network interface 524.

In another example, the gaming application may be implemented as a gaming-as-a-service with remote rendering (RR-GaaS) architecture (“remote rendering architecture”) within the LF gaming network 550. In the remote rendering architecture, an instance of a gaming application exists on a remote server (e.g., gaming server 554) in the LF gaming network 550. An encoder module renders every frame of a game scene, compresses the data for the scene, and transmits the data for the scene to a client (e.g., a LF gaming system 560A). The LF gaming system that receives the data decodes and displays the holographic video game data. A player operating the gaming system interacts with the content displayed on the gaming system as the data is streamed from the gaming server 554. The player response is encoded and sent to a decoder on the gaming server 554, and the gaming server 554 decodes these player responses and employs them as control inputs for the game state. This remote-rendering model requires a large bandwidth for the transfer of the video to the local client, but keeps the computation of the game engine on a gaming server 554.
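The encode/decode round trip of the remote rendering architecture can be sketched as below. This is an illustrative assumption-laden example: `zlib` stands in for a real video codec, and JSON stands in for a real scene format; the function names are invented for the sketch.

```python
# Hypothetical sketch of the remote-rendering round trip: the server renders
# and compresses a frame and streams it; the client decodes the frame and
# returns the player's response as an encoded control message.
import json
import zlib

def server_render_frame(game_state):
    # "Render" a frame: here, just serialize the scene description.
    frame = json.dumps(game_state).encode()
    return zlib.compress(frame)

def client_receive_frame(payload):
    return json.loads(zlib.decompress(payload))

def client_send_input(action):
    return zlib.compress(json.dumps(action).encode())

def server_receive_input(payload):
    # The server decodes player responses and applies them as control inputs.
    return json.loads(zlib.decompress(payload))

state = {"frame": 42, "objects": [{"id": "alien", "pos": [1, 2, 3]}]}
shown = client_receive_frame(server_render_frame(state))
assert shown == state
echoed = server_receive_input(client_send_input({"button": "fire"}))
assert echoed == {"button": "fire"}
```

The bandwidth cost of this model comes from the rendered-frame payloads, which in a light field system are far larger than the small control messages travelling back to the server.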

In the remote rendering architecture, the gaming server may employ a LF processing engine (e.g., LF processing engine 530) that generates holographic content of the game administered on the gaming server 554. The gaming server 554 compresses the holographic video game content into a vectorized format, encodes it with a codec, and transmits the data to a LF gaming system 560A, 560B via a network 552. A LF display system 500A, 500B of the receiving LF gaming system 560A, 560B decodes the vectorized data into a rasterized format for display to the client using the LF display system 500A, 500B, respectively.

In the remote rendering architecture, the LF display system 500A, 500B of a LF gaming system 560A, 560B may take into account the exact hardware configuration (e.g., resolutions, FOVs, display panel topology, deflection angles, calibration data, etc.) and/or viewing volume of the corresponding LF display assembly 510 of the gaming system 560A, 560B when displaying the holographic video game content. In an example, the LF display system may read a set of configuration or database files to generate the appropriate display instructions for presentation of the holographic content. The decoding codec could be the same codec used to encode the data, and this codec may be proprietary to the LF display system manufacturer or the game developer. Similar to the client-server architecture discussed above, player input as a response to the updated holographic video game content may be sampled and forwarded to the gaming server 554 using a conventional device (e.g., a keyboard or joystick).

In another example, the gaming application may be implemented as a gaming-as-a-service with local rendering (LR-GaaS) architecture (“local rendering architecture”) within the LF gaming network 550. In a local rendering architecture, an instance of a gaming application exists on a gaming server in the LF gaming network 550, but the gaming server 554 does not render holographic video content to send to a gaming system. Instead, the gaming server 554 accepts inputs from one or more connected gaming systems, executes gaming logic on the inputs, and, subsequently, determines the game state for the connected gaming systems. The gaming server 554 transmits game state data to the connected gaming systems via the network 552, and each gaming system renders holographic video game content of the game state locally. In an embodiment, the gaming server 554 transmits game state data as high-level rendering instructions that allow each gaming system to draw each frame independently. In another embodiment, an instance of the game application may be executing locally on a gaming system and the gaming server is configured to transmit game state data to update the game state of the local gaming application. In this case, the gaming application may locally update and render the current game state many times per second (e.g., to account for moves by opponents or other game play action).
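The division of labor in the local rendering architecture can be sketched as follows. The split is the point of the example: authoritative logic on the server, rasterization on each client. The state layout and "frame" format are invented for the sketch.

```python
# Illustrative sketch of the local-rendering split: the server runs the game
# logic and sends compact state updates; each gaming system renders its own
# frames from that state rather than receiving rendered video.

def server_step(state, inputs):
    """Authoritative game logic: apply every client's input to the state."""
    for player, move in inputs.items():
        x, y = state["players"][player]
        state["players"][player] = (x + move[0], y + move[1])
    state["tick"] += 1
    return state

def client_render(state):
    """Local rendering: turn transmitted game state into (here) a textual
    'frame'; a LF gaming system would rasterize holographic content instead."""
    return [f"{p}@{pos}" for p, pos in sorted(state["players"].items())]

state = {"tick": 0, "players": {"p1": (0, 0), "p2": (5, 5)}}
state = server_step(state, {"p1": (1, 0), "p2": (0, -1)})
assert state["players"]["p1"] == (1, 0)
assert client_render(state) == ["p1@(1, 0)", "p2@(5, 4)"]
```

Because only the compact state update crosses the network, each connected gaming system can re-render the same tick for its own display hardware independently.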

In this local rendering architecture, a gaming application executing on a gaming system may provide one of several data formats to a LF processing engine 530 of a LF display system 500 executing locally. The LF processing engine 530 uses this data to render the rasterized video data for the LF display assembly, taking into account the hardware configurations, viewing volumes, etc. of the gaming system to appropriately present the rendered holographic video game content.

In another example, the gaming application may be implemented as a peer-to-peer architecture within the LF gaming network 550. In the peer-to-peer architecture, a gaming server 554 does not administer the global game state. Instead, every connected gaming system controls its local game state. A peer (e.g., a gaming system) sends data to all other peers, and receives data from them, with the overall assumption that this information is reliable and correct. In this architecture, one or more of the peers may render a view of the game using a LF display system. This process is similar to the methods described herein and above. The gaming system may also transmit vectorized holographic video game content or video game data to other connected gaming systems and/or spectators.
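The broadcast pattern of the peer-to-peer architecture can be sketched as below. This is a toy model under stated assumptions: delivery is assumed reliable and peers are assumed honest, as the paragraph above notes; real peer-to-peer games add lockstep, checksums, or trust mechanisms that are omitted here.

```python
# Minimal peer-to-peer sketch: every peer owns its local state, broadcasts
# it to all other peers, and merges what it receives into its world view.

class Peer:
    def __init__(self, name):
        self.name = name
        self.world = {}            # merged view of every peer's state
        self.links = []            # all other peers

    def connect(self, others):
        self.links = [p for p in others if p is not self]

    def broadcast(self, local_state):
        """Send this peer's state to every other peer (assumed reliable)."""
        self.world[self.name] = local_state
        for peer in self.links:
            peer.world[self.name] = local_state


peers = [Peer("a"), Peer("b"), Peer("c")]
for p in peers:
    p.connect(peers)
peers[0].broadcast({"pos": (1, 2)})
peers[1].broadcast({"pos": (9, 9)})
assert peers[2].world == {"a": {"pos": (1, 2)}, "b": {"pos": (9, 9)}}
```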

There are several different revenue models for the various architectures of the LF gaming network 550. For example, the LF gaming network 550 may employ a subscription service. That is, the LF gaming network may require a payment for managing the transmission, execution, and/or administration of a gaming application 590 within the LF gaming network 550. The payment may be for a period of time (e.g., monthly), a per-game subscription for a particular game, or a payment to access a gaming service on a gaming server 554.

In an embodiment, the subscriptions may include different tiers. Each tier may correspond to a different level of holographic content experience. For example, the lowest tier may include lower levels of rendering, while the highest tier includes higher levels of rendering. In another example, each tier can include additional sensory experiences. To illustrate, a first tier may include only light-field and auditory content, while a second tier includes light-field, auditory, and tactile content. In some examples, a tier may be reserved for two-dimensional representations of the gaming application such that it may be employed by a gaming system without a LF display system. Gaming systems including a LF display system may increase the value of a gaming marketplace, particularly if purchasable content can be viewed as holograms.

The LF gaming system may support various types of spectators for the gaming application. In an embodiment, a LF display system 500A, 500B of a LF gaming system 560A, 560B may compress rendered holographic video game content presented by the LF display system into a vectorized format and encode it with a codec, and the LF gaming system may transmit the data as a live stream to other spectators via the network 552. The stream could take various formats. For example, a gaming system may stream compressed video corresponding to different views. A first view may be an independent view of the game action. A second view may be localized to the viewpoint of the gaming system providing the stream. Spectators in the LF gaming network 550 may execute an instance of the gaming application on a LF gaming system to view the holographic video game content. The stream may update the game data of a spectator's local gaming application in real time, or with a small time delay. A time delay may exist due to network delays, or may be intentionally inserted in order to prevent cheating (e.g., by buffering data from the game stream source). Spectators do not provide input for the gaming application, but are solely observers of the game state. Such spectators may choose to render the stream as a single view for 2D displays, multiple views for autostereoscopic viewing, or holographic data for a spectator light field display. All of these rendering methods could be offered in exchange for payment via a digital rights management system on a per-game basis, a per-game-series basis, a subscription model for a period of time, or some similar arrangement. In another embodiment, the stream may be sent to a streaming service (e.g., Twitch) viewable by other viewers.
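The intentional anti-cheat delay mentioned above can be sketched as a simple buffer: frames are released to spectators only once they are old enough. The delay value and the tick-based framing are illustrative assumptions, not values from the disclosure.

```python
# Sketch of an intentional spectator delay buffer: stream frames are held
# back for a fixed number of game ticks so that a live opponent cannot gain
# an advantage by watching the spectator stream.
from collections import deque

class DelayedStream:
    def __init__(self, delay_ticks=3):
        self.delay = delay_ticks
        self.buffer = deque()          # (tick, frame) in arrival order

    def push(self, tick, frame):
        self.buffer.append((tick, frame))

    def pop_visible(self, current_tick):
        """Return only the frames old enough to be shown to spectators."""
        out = []
        while self.buffer and current_tick - self.buffer[0][0] >= self.delay:
            out.append(self.buffer.popleft()[1])
        return out


s = DelayedStream(delay_ticks=3)
s.push(0, "frame0")
s.push(1, "frame1")
assert s.pop_visible(current_tick=2) == []          # nothing old enough yet
assert s.pop_visible(current_tick=3) == ["frame0"]  # 3 ticks have elapsed
assert s.pop_visible(current_tick=4) == ["frame1"]
```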

Gaming servers 554 in the LF gaming network 550 may support gaming applications with both two-dimensional and/or four-dimensional representations of a game state. That is, a gaming server may render a scene in a game for a traditional 2D video output, or may render a scene for a 4D light field output. The 4D light field outputs may be associated with additional energy domain content available for light field displays (e.g., tactile surfaces). The different types of representations may employ digital rights management systems and exchange content for payment as described herein.

eSports in a Gaming Network

A LF gaming network may be configured to support electronic sports (“eSports”). eSports is a form of competition using video games, taking the form of organized, multiplayer video game competitions, particularly between professional players, individually or as teams. eSports tournaments typically take place at a physical venue in front of a live audience. For example, an eSports match may occur at a repurposed basketball arena where the players employ computer systems on the court and the video game is presented on a large display surface in the arena.

Typically, during an eSports match, a dedicated spectator, referred to in this disclosure as the observer, has an instance of the game running locally in the arena. The observer may move about within the game and present their viewpoint on a large monitor in the arena such that the audience in attendance may watch the players play the game. The observer may also stream their viewpoint online to a much wider audience via a network. Observers may employ any of the spectating modes described hereinabove.

An eSports arena may be configured to display holographic video game content using a LF display system. The LF display system can be configured in multiple ways. For example, the surface of the LF display assembly 510 of the LF display system 500 may be the floor of an arena. Other example configurations include the horizontal surface of a stage, a wall-mounted vertical display visible within the arena, a curved display surface, etc. In all of these circumstances, the display system presents the holographic video game content such that it is viewable by viewers in the arena. The LF display system may have a variety of geometries as described herein. For example, the LF display system may include a central display, a central display with side displays angled toward the audience, a display surface curved toward or around the audience, or some other variation of display surface topology.

Within the arena, an instance of a gaming application executes and provides gaming data to an LF processing engine 530. The LF processing engine renders a scene within the game and presents the rendered scene as holographic video game content within the holographic object volume of the LF display system in the arena. The holographic video game data may be generated taking into account the hardware configuration of the arena display assembly, the viewing volume geometries, etc., as described herein. The LF processing engine may generate a viewpoint from an observer, any of the players, or any other instance of the game engine that is being updated with game data in real time.

The eSports event may be viewed remotely as well. For example, a remote observer may execute an instance of a gaming application. Data representing the eSports event may be received via a network and input to a LF display system at the remote location. The LF display system at the remote location includes an LF processing engine that renders and presents holographic video game data of the eSports event. Similar to the above, local observers may be able to manage the presented viewpoint at the remote location. It is also possible that an observer's viewpoint may be simulcast directly to LF display systems that are not running an instance of the game. For example, the LF display system of one of the players or of the observer within the eSports arena may compress the game data into vectorized data in real time, and stream this data via a network to one or more remote clients with a LF display system. The remote clients could decode the vectorized data and project the holographic content in a method similar to that already described for the remote rendering architecture. A pay-per-view structure may be employed to generate revenue from viewing such events.

Light Field Display System for Video Games

FIGS. 6A-6B illustrate LF gaming systems built into rooms, in accordance with some example embodiments. The room (e.g., room 600A and room 600B) includes at least a first wall 602 and a second wall 604 and may be the living room of a viewer 610. Here, the room includes a LF display system, with an array of LF display modules (“LF array”) 620 covering the wall 602 and the wall 604. While the illustrated LF display system is built into two of the walls, it could be built into any other aspects of the room 600. The LF array 620 is configured to present holographic video game content (e.g., holographic objects 606 and 608) to viewers (e.g., viewer 610) in the room. In the illustrated example, the viewing locations are anywhere within the room 600 but could also be other locations. In FIG. 6, the LF array 620 is an embodiment of the LF display assembly 510. Further, the LF display system comprising the LF display assembly 510 is an embodiment of the LF display system 500.

The LF array 620 presents holographic video game content of video game data generated by a gaming application. A viewer 610 in the room may be able to perceive and interact with the presented holographic content. In this example, video game data from a network system 556 is received by a gaming application, which converts the video game data to holographic video game content and provides the holographic content to the controller 520 of the LF display system 500 for display to one or more viewers (e.g., 610) within the room. The video game application generates a game world with a variety of in-game objects, such as game characters, items, and events (e.g., explosions). The video game application also generates inputs that are transmitted to the network system 556, and the game application receives game data for the game world.

Here, the LF array presents holographic game content depicting a portion of the game world. The holographic game content includes holographic objects depicting in-game objects in the game world at corresponding real-world positions in a holographic object volume. The holographic object volume can extend in front of the display surface (e.g., in front of the walls) into the room and can also extend behind the display surface (e.g., behind the walls).

In the room 600A of FIG. 6A, the holographic game content includes a holographic object 606 and a holographic object 608 in the portion of the holographic game content presented to the user. The two holographic objects depict different in-game objects in the game world. The first holographic object 606 depicts an alien standing in front of the viewer 610, and the second holographic object 608 depicts an explosion taking place to the left of the viewer 610. Although not shown in FIG. 6A, the holographic game content can also include holographic objects in the portion of the holographic object volume that extends behind the display surface of the LF array 620. For example, the holographic game content may make it appear as if the user is standing on the surface of Mars, with holographic objects depicting far-away rocks and other in-game objects on the surface displayed behind the display surface.

The viewer 610 stands in the room 600A and can view the holographic game content. While the holographic game content is being presented, a tracking system of the LF display system may track a real-world position of one or more of the user's body parts. For example, the LF display system tracks the user's hand, arm, or torso, etc., and determines a corresponding position in the game world for each tracked body part. In some embodiments, the tracking system includes one or more 2D cameras, a light field camera, one or more depth sensors, or a combination of these. The gaming application may be configured to accept this body part position data as user inputs, allowing the game world to be updated with the tracking information acquired by the LF display system.

As described in more detail above, the LF display system can track one or more body parts of the viewer 610. For example, the LF display system may track the position of the user's upper body, the eyes of a viewer 610, or the hands of a viewer 610. The tracked positions can be used to represent the overall position, or particular positions of body parts, of the user within the room. The LF display system tracks the positions of the one or more body parts and updates the corresponding positions in the game world presented as holographic content in the room 600A.

The LF display system can provide the viewer with sensory cues for interactions that occur during execution of the gaming application. Tracking data can be used to augment or determine the occurrence of these interactions. For example, if the viewer is participating in a war game, an in-game interaction occurs when a corresponding position in the game world (i.e., the position corresponding to the viewer 610) enters a volume of space occupied by a game object. For instance, if the viewer's hand is being tracked, an in-game interaction takes place when the viewer 610 reaches forward so that his hand enters the volume of space occupied by the holographic object depicting the alien 606. The position of the viewer's hand may be tracked by the tracking module of the LF display system within room 600A, and provided as user input to the gaming application. In response, the gaming application may generate holographic video game data for more than one energy domain. For example, the holographic video game data may be processed by the LF display system resulting in both a visual cue (e.g., displaying the holographic alien reacting to the touch) and tactile feedback (e.g., a volumetric tactile surface representing the alien's skin). This provides the viewer 610 with both the tactile sensation and visual feedback of touching the alien, which allows for an immersive experience that provides the illusion that the user is interacting with a real alien.
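The hand-enters-volume test above can be sketched with a simple axis-aligned bounding box check. The box coordinates, return values, and feedback labels are illustrative assumptions; a real system would use the game engine's collision volumes.

```python
# Sketch of the touch interaction test: an in-game interaction fires when a
# tracked body-part position enters the bounding box occupied by a
# holographic object, triggering multi-domain (visual + tactile) feedback.

def inside_aabb(point, box_min, box_max):
    """True if a 3-D point lies inside an axis-aligned bounding box."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def check_touch(hand_pos, object_box):
    if inside_aabb(hand_pos, *object_box):
        # In a full system this would drive display instructions for both
        # energy domains; here we just name the hypothetical cues.
        return {"visual": "alien_reacts", "tactile": "skin_surface"}
    return None


alien_box = ((1.0, 0.0, 2.0), (1.5, 2.0, 2.5))  # (min corner, max corner), metres
assert check_touch((1.2, 1.0, 2.2), alien_box) is not None   # hand inside
assert check_touch((0.0, 0.0, 0.0), alien_box) is None       # hand elsewhere
```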

As another example, an in-game interaction takes place when a moving in-game object in the game world collides with a position corresponding to a tracked body part. For example, if the user's torso is being tracked, and this tracked body position is provided as a user input to the gaming application, collision detection logic within the gaming application may detect that an in-game interaction takes place when a moving object collides with the user's torso. The gaming application may provide multi-sensory holographic content to the controller of the LF display system of room 600A, which in turn provides a combination of a visual cue (e.g., displaying a holographic object bouncing off the user's torso) and tactile feedback (e.g., generation of a volumetric tactile surface on the user's torso) to provide the user with both the physical sensation and visual feedback of being hit by the moving object.

In some examples, an in-game interaction can also take place if an event in the game world occurs within a threshold distance of the position corresponding to a tracked body part. As in the previous examples, the LF display system can provide a combination of a visual cue and tactile feedback when the gaming application generates multisensory holographic data to indicate that an in-game interaction occurs. For instance, in the example shown in FIG. 6A, an explosion (a type of event) is taking place within a threshold distance of the position corresponding to the user. For the explosion event shown in FIG. 6A, the LF display system displays a holographic object 608 depicting the explosion and also presents tactile feedback (e.g., tactile pressure wave 612) to simulate the physical sensation of being hit by the blast wave of the explosion. The game application can determine the magnitude, position, direction, etc. of tactile feedback for in-game events based on the tracked position of the viewer 610, the tracked movement of the viewer 610, or any of the information described herein.
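The threshold-distance test and the distance-dependent feedback magnitude could look like the sketch below. The linear falloff law, the radius, and the pressure scale are all assumptions made for illustration; the disclosure does not specify a particular model.

```python
# Sketch of threshold-distance event feedback: an explosion within a given
# radius of the tracked body-part position triggers tactile feedback whose
# magnitude falls off with distance from the event.
import math

def blast_feedback(event_pos, body_pos, radius=3.0, max_pressure=1.0):
    """Return a tactile pressure value, or None outside the threshold."""
    d = math.dist(event_pos, body_pos)
    if d >= radius:
        return None                       # outside the threshold: no interaction
    # Assumed linear falloff: full pressure at the center, zero at the edge.
    return max_pressure * (1.0 - d / radius)


assert blast_feedback((0, 0, 0), (5, 0, 0)) is None      # too far away
assert blast_feedback((0, 0, 0), (0, 0, 0)) == 1.0       # at the center
assert abs(blast_feedback((0, 0, 0), (1.5, 0, 0)) - 0.5) < 1e-9
```

The returned pressure would then be turned into display instructions for the ultrasonic emitters, directed at the tracked body part.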

In the room 600B of FIG. 6B, the LF display system is presenting a holographic user interface 614 to a viewer 610. Here, the LF display system tracks a position of the viewer 610 and displays the holographic UI 614 at a position corresponding to the tracked position of the viewer. The LF display system may present the holographic UI at a position within a threshold distance from the viewer 610. For example, the threshold distance may correspond to the length of an average adult's arms so that the holographic UI 614 is projected within arm's reach of the viewer 610. The LF display system continues to track the position of the viewer 610 as the viewer 610 moves around the room, and the LF display system moves the holographic UI 614 accordingly so that the holographic UI 614 is displayed at the same position relative to the viewer 610. In some embodiments, the LF display system may also track the viewer's selection of interface elements of the UI 614, and provide this tracked data as input to a game application.
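The follow-the-viewer behavior amounts to re-anchoring the UI at a fixed offset from the tracked position each update, which can be sketched as below. The 0.6 m arm-reach offset and the fixed forward axis are illustrative assumptions; a real system would also account for the viewer's facing direction.

```python
# Sketch of a holographic UI that follows the tracked viewer: each tracking
# update re-anchors the panel a fixed offset in front of the viewer, so it
# stays within arm's reach as the viewer moves around the room.

def ui_position(viewer_pos, arm_reach=0.6):
    """Place the UI panel arm_reach metres in front of the viewer (+x here)."""
    x, y, z = viewer_pos
    return (x + arm_reach, y, z)


# Viewer walking through the room; the panel keeps the same relative offset.
positions = [(0.0, 0.0, 1.6), (1.0, 2.0, 1.6)]
panels = [ui_position(p) for p in positions]
assert panels == [(0.6, 0.0, 1.6), (1.6, 2.0, 1.6)]
```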

The holographic UI 614 may depict an interface that includes one or more user interface elements that the viewer 610 can select. The UI elements can control various functions within the game application, such as accessing menus or selecting items in the possession of the user's in-game character. In other embodiments, the holographic user interface is part of a non-game application, such as an internet browser, and the UI elements control various functions of that application.

The LF display system may track the position of body parts of the viewer 610 (e.g., one or both of the viewer's hands), such that the viewer 610 may interact with the holographic UI 614. For example, the gaming application may use the position of a user's hand provided by the tracking module 526 of the LF display system to determine that the user's hand is in the same position as a UI element and, subsequently, determine that the viewer 610 is interacting with that element. In response, the gaming application may generate appropriate holographic video game content as well as tactile content to convey to the user that the user has successfully selected the UI element. For example, the LF display system may be directed to change the color of the selected UI element and project an ultrasonic wave toward the user's hand when the viewer interacts with a specific portion of the holographic UI 614. This combination of visual and tactile feedback provides for a more immersive experience that creates the impression that the user is touching and interacting with a real-world touch surface.

FIG. 7 illustrates a table-top LF gaming system, in accordance with one or more embodiments. Here, the table-top LF gaming system 700 includes a LF display system built into a table 708, with an array of LF display modules (“LF array”) 720 on the top surface 702 of the table 708. The LF array 720 is configured to present holographic video game content generated by a gaming application to viewers in viewing locations that can view the top surface 702 of the table 708. In FIG. 7, the LF array 720 is an embodiment of the LF display assembly 510. Further, the LF display system is an embodiment of the LF display system 500.

The LF array 720 presents holographic video game content generated by a game application such that it appears on the top surface 702 of the table 708. The video game content includes a holographic object 704 representing a button. A viewer may be able to perceive and interact with the presented holographic object 704. For example, a viewer can use their hand 706 to interact with the holographic object 704 (e.g., by pressing the button); subsequently, the LF display system may track the motion of the hand and provide the position of the hand to the video game application. The video game application may determine that the button is pressed and, as a result, generate updated holographic video game data for display (e.g., an explosion occurs).

Additionally, the LF display system can be configured to receive other types of interactions from the viewer (e.g., auditory cues, visual cues, etc.) and change the holographic video game content presented to the user in response. As an illustration, a viewer at the table 708 can state “pause game,” and the LF display system records the audio, recognizes the command, and pauses the gaming application in response. Similarly, a viewer at the table 708 can interact with the holographic content to rotate the view of the game world displayed by the LF array 720. As an illustration, a viewer can use their hands to create a gesture that rotates the view of the in-game world displayed by the LF array 720. To do so, the LF display system captures an image sequence of the viewer making the gesture, recognizes the gesture, and rotates the holographic content in response.
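Once a voice command or gesture has been recognized, mapping it to a game action is a dispatch step, sketched below. The command names, the game-state fields, and the table-driven design are assumptions for illustration; the audio and vision recognition themselves are assumed to happen upstream.

```python
# Sketch of multi-modal input dispatch: recognized (modality, token) pairs
# are looked up in a table and mapped to actions on the game state, such as
# pausing the game or rotating the displayed view of the game world.

COMMANDS = {
    ("voice", "pause game"): lambda g: g.update(paused=True),
    ("gesture", "rotate_cw"): lambda g: g.update(yaw=(g["yaw"] + 90) % 360),
}

def dispatch(game, modality, token):
    """Apply the action for a recognized input, if any; unknown inputs are ignored."""
    handler = COMMANDS.get((modality, token))
    if handler:
        handler(game)
    return game


game = {"paused": False, "yaw": 0}
dispatch(game, "voice", "pause game")
dispatch(game, "gesture", "rotate_cw")
assert game == {"paused": True, "yaw": 90}
```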

FIG. 8 is a flow chart illustrating a method for implementing a holographic video game employing a LF display system, in accordance with one or more embodiments. The method 800 is performed by a LF display system (e.g., LF display system 500). The process 800 may include additional or fewer steps, and the steps may occur in a different order. Further, various steps, or combinations of steps, can be repeated any number of times during execution of the method.

The LF display system tracks 810 a real-world position of one or more of a viewer's body parts. For example, in the room-sized system described above with respect to FIGS. 6A and 6B, the tracking system of the LF display system may track multiple body parts of the viewer (e.g., the viewer's head, hands, and torso).

The LF display system determines 820 an in-game position in the game world corresponding to the real-world position of the tracked body part. For example, the controller of the LF display system maintains a mapping of real-world three-dimensional positions to in-game three-dimensional positions in the game world and uses the mapping to identify an in-game position in the game world corresponding to the real-world position of the tracked body part.
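The position mapping the controller maintains can be sketched as a per-axis scale and offset. The transform parameters here are purely illustrative; a real system would derive them from calibrated display geometry, and the mapping could be any invertible transform.

```python
# Sketch of the real-world-to-game-world mapping: a uniform scale plus an
# offset per axis, applied to each tracked body-part position.

def make_mapping(scale, offset):
    """Build a function mapping real-world (x, y, z) to game-world (x, y, z)."""
    def to_game(real_pos):
        return tuple(scale * r + o for r, o in zip(real_pos, offset))
    return to_game


to_game = make_mapping(scale=2.0, offset=(10.0, 0.0, -1.0))
# A tracked hand at (1.5 m, 1.0 m, 0.5 m) in the room maps into the game world:
assert to_game((1.5, 1.0, 0.5)) == (13.0, 2.0, 0.0)
```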

The LF display system provides 830 the game-world position of the tracked body part to the game application as user input. The gaming application may determine that an in-game interaction occurred at the in-game position in the game world. As described above with respect to FIGS. 6A and 6B, in-game interactions can take place when the in-game position corresponding to the tracked body part crosses into a volume occupied by an in-game object in the game world (e.g., when the viewer “touches” the alien in FIG. 6A). An in-game interaction can also take place when an event in the game world occurs within a threshold distance of the position corresponding to the tracked body part (e.g., the viewer being within the blast radius of the explosion in FIG. 6A). In response to determining that an in-game interaction has occurred, the gaming application may provide holographic content for one or more energy domains to the controller of the LF display system. The in-game interaction is represented by some combination of visual, haptic, and other sensory feedback to convey to the viewer that the interaction is taking place and to provide the appropriate multi-sensory feedback.

The controller of the LF display system receives 840 the holographic video game data for the visual, haptic, and other sensory domains portraying the in-game interaction. For example, the holographic content portraying the in-game interaction can depict an in-game object (e.g., an alien) reacting to being touched or an in-game event (e.g., an explosion) happening within a threshold distance of the viewer's position.

The controller generates 850 instructions for the presentation of holographic content for one or more energy domains portraying the in-game interaction. The LF display system displays 850 the holographic content representing the in-game interaction. If the feedback includes multi-sensory feedback, the LF display system takes the appropriate actions to provide 850 the multi-sensory feedback. For example, the LF display system may generate a tactile interface, play a sound, or generate a light field representing the in-game interaction. Providing the multi-sensory feedback yields an immersive viewer experience that creates the illusion that the viewer is interacting with real objects. The controller generates the instructions for the display of visual, tactile, and other sensory domains for the particular hardware configuration of the LF display assembly 510, as described above, as well as the hardware configuration of the sensory feedback system 570 (e.g., a geometry of the ultrasonic transducer configuration for generating tactile surfaces).

Light Field Display System for Electronic Sports

FIG. 9 illustrates a cross-section of an arena including a LF display system for displaying holographic video game content, in accordance with one or more embodiments. In FIG. 9, the arena 900 is configured to display holographic video game content 930 as part of an eSports competition. In FIG. 9, the LF display system 940 is an embodiment of the LF display system 500. The holographic video game content is generated by a gaming application and presented by the LF display system 940. The LF display system 940 can be used to display holographic game content for any type of video game that is commonly played in eSports matches, such as sports simulation games (e.g., MADDEN NFL), real-time strategy games (e.g., STARCRAFT), battle arena games (e.g., DOTA 2, LEAGUE OF LEGENDS), or shooter games (e.g., CALL OF DUTY).

In the illustrated example, the arena 900 includes a display stage configured to represent the battlefield of a strategy game during an eSports match. The stage is surrounded by viewing locations 922 positioned in a manner similar to other large arenas. The top surface of the stage 902 is covered with an array of LF display modules such that the area above the stage forms the game display area 910, which is the holographic object volume for the LF display system 940. It should be noted that while the game display area is shown only above the stage 902, the game display area also extends beneath the stage 902, similar to the holographic object volume 255 shown in FIG. 2A. The LF display system 940 presents holographic video game content 930 in the gaming display area 910 such that viewers in the arena 900 may perceive the holographic video game content 930. In the arena 900, the viewing locations 922 are positioned with a rake such that the sightline for each viewing location allows unobstructed viewing of the holographic video game content 930 from a viewing volume (e.g., viewing volume 920A). Here, the arena 900 includes one viewing volume that surrounds the stage 902, including viewing volumes 920A and 920B, such that all the viewers are presented with the same holographic video game content. In other configurations there may be more than one viewing volume.

More generally, the LF display system 940 may have a display surface that is substantially or approximately horizontal. In several examples, the LF display system may include a display surface that is (i) at least some part of the floor of the arena, (ii) at least some part of a stage, (iii) a wall in an arena, (iv) an elevated display board common in many arenas, or (v) at least some portion of a raised viewing platform in an arena. Other types of horizontal surfaces are also possible. In the illustrated configuration, the viewers are elevated relative to the display surface and look downward to view the holographic video game content that is projected from the display surface, and the viewers may partially or fully surround the display surface. There are many other configurations for a light field display surface, including a vertically-mounted display surface with viewing locations that are arrayed approximately in front of the LF display surface as described elsewhere in this disclosure (450A shown in FIG. 4C, 450B shown in FIG. 4D, and 450C shown in FIG. 4E), as well as a curved LF display surface.

The game display area 910 is illustrated as a bounded square for clarity, but it may be only a portion of the actual game display area where holographic video game content 930 may be projected. For example, the game display area 910 may extend further towards the top of the arena 900 and/or into the stage 902, as previously mentioned. Additionally, a portion of the game display area 910 and the viewing volume 920 (e.g., 920A and 920B) may overlap spatially. While illustrated as a partial overlap, the game display area 910 and the viewing volume 920 may wholly overlap spatially. The regions of spatial overlap are areas in which viewers may interact with the holographic video game content 930, as previously described.

Also illustrated are a first control terminal 910 and a second control terminal 912 receiving inputs from a first player 906 and a second player 904, respectively, participating in the eSports competition. Each control terminal executes a gaming application that receives inputs from a player which change player positions, exercise player options such as activating shields or providing special powers, or cause events such as firing a weapon. The player inputs are used by the video game application to update the game state and generate the holographic video game data presented in the game display area. In some examples, the inputs from each player control a respective character in the game world during execution of the gaming application. In other words, the control input from the first player 906 controls a first character in the game display area 910, and the control input from the second player 904 controls a second character in the game display area. In some embodiments, the control input from each user can control multiple characters. For example, if the game application is a strategy game, the control input received from each user can control multiple characters in each player's respective army.

The control input can move characters in the game display area 910 to different positions. The control input can also direct a character to perform in-game actions such as firing a weapon, picking up an object, throwing an object, or performing a melee attack. Apart from controlling a character, the control input can also indicate selections of menu options or other UI elements, such as selections in a play-calling interface of a football simulation game. In the illustrated example, control inputs are received from control terminals operated by the two players. Each control terminal is a device capable of receiving a user input from a user and sending the input to a gaming application, which generates holographic video game data to be displayed on the LF display system 940.
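The control-input handling described above can be sketched, purely for illustration, as follows. The class, field, and input-event names are assumptions for this sketch; the disclosure does not specify an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """A minimal in-game character controlled by one player's inputs."""
    position: tuple = (0.0, 0.0)
    actions: list = field(default_factory=list)

def apply_control_input(character, control_input):
    """Update a character (or a UI) from one player control input."""
    kind = control_input["type"]
    if kind == "move":
        # Move the character to a different position in the game display area.
        character.position = control_input["target"]
    elif kind in ("fire_weapon", "pick_up", "throw", "melee_attack"):
        # In-game actions performed by the character.
        character.actions.append(kind)
    elif kind == "menu_select":
        # UI selections (e.g., a play-calling interface) bypass the character.
        return control_input["option"]
    return None

hero = Character()
apply_control_input(hero, {"type": "move", "target": (3.0, 4.0)})
apply_control_input(hero, {"type": "fire_weapon"})
```

In this sketch, movement, action, and menu inputs are dispatched on a simple type tag; a full gaming application would route them into its game-state update instead.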

Generally, the control terminals are computing devices. For instance, in the embodiment shown in FIG. 9, the control terminals are desktop computers and the players provide user inputs via a keyboard, a mouse, a joystick, a wheel, a gaming console with multiple controls, or some other input device. In other examples, the control terminals may be other computing devices, such as a portable computer, a touchscreen, a handheld device, a holographic UI (such as the UI described with respect to FIG. 6B), some other input device, or some combination thereof.

The control terminals may also incorporate some or all of the components of a LF display system. For instance, a control terminal may be implemented as the room-sized LF gaming system described above with respect to FIGS. 6A-6B. In this case, the control terminal receives user inputs by tracking body parts of the user in the manner described above: by tracking the user's hands to allow the gaming application to detect interactions with a holographic UI (as described with respect to FIG. 6B), by tracking one or more parts of the user's body to detect interactions with holographic objects (as described with respect to FIG. 6A), or some combination of the two.

Each of the control terminals displays game content portraying a region of the game world. In one embodiment, the user of a control terminal can control the region being displayed. For example, if the game application is a real-time strategy game or a battle arena game, the user can change the displayed region of the game world by interacting with a map interface or moving the mouse cursor to the edge of the screen. In embodiments where the control terminals are implemented as computing devices, the region is displayed on a display of the computing device. In embodiments where the control terminals are implemented with components of a LF display system, the region may be displayed as holographic content.

In some examples, some portion of the game content displayed on a control terminal (e.g., control terminal 912) is displayed as holographic video game content 930 in the game display area 910. Thus, in the example of FIG. 9, the LF display system 940 displays holographic video game content corresponding to a viewpoint offered on one of the control terminals. In this manner, viewers in the arena may also perceive the game content, viewpoints, etc. that a player (e.g., player 904) is interacting with at a control terminal (e.g., control terminal 912). Additionally, the presented holographic video game content 930 can include holographic content not seen or interacted with by the players. For example, the holographic video game content may include objects depicting other virtual objects (e.g., buildings, surroundings, etc.), events in the game world (e.g., explosions, mission status, etc.), advertisements, overlays, commentaries, player statistics, etc.

In an embodiment, a third user (e.g., an observing user who does not control any characters in the game application) at a third control terminal generates the holographic video game content 930 that is displayed in the game display area 910. To illustrate, the third control terminal (not pictured) may run a video game application which is updated with the game data and presents a third user (not pictured) with a user interface to select the holographic video game content that is displayed by the LF display system. The third user may be only a spectator who selects a viewpoint within the video game from which the holographic video is rendered, but who may not be allowed to enter user inputs to the game or change the game state. This may allow the third user to select and display regions of the game world where major in-game events, such as large battles, are taking place. In other examples, the third user may be able to select between player viewpoints, display overlays, augment the holographic video game content 930 with additional sensory content, and generally enhance the viewing experience for viewers in the arena 900. In some cases, the spectator may have “admin” rights to the server and could use these rights to change the game state if something goes wrong in the game (e.g., a player becomes stuck somewhere or some other bug occurs). In some scenarios, there may be both an admin spectator and an observer spectator. In some scenarios (e.g., non-competitive eSports), the admin, whether or not they are the main spectator, could change the game state based on audience input.
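The spectator roles described above can be expressed, for illustration only, as a simple permission table. The role and action names are assumptions of this sketch, not terms used by the disclosure: an observer may select viewpoints but not alter game state, while an admin spectator may also change game state.

```python
# Hypothetical role-to-permission mapping for the spectator roles described
# above. Role and action names are assumed for illustration.
PERMISSIONS = {
    "player":   {"input", "select_viewpoint"},
    "observer": {"select_viewpoint"},               # spectator only
    "admin":    {"select_viewpoint", "change_game_state"},
}

def is_allowed(role, action):
    """Return True if the given role may perform the given action."""
    return action in PERMISSIONS.get(role, set())
```

A gaming server could consult such a table before applying any state change requested by a spectator terminal.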

In some embodiments, a control terminal (e.g., control terminal 910) captures an image of its respective player (e.g., player 906), and a representation of the player is displayed as part of the holographic video game content 930. This provides viewers in the arena 900 with a view of the player during the eSports match, such that viewers can see the facial expressions and body language of the player. In embodiments where a control terminal captures and displays video of a player, viewers in the viewing volumes 920A, 920B may view the physical gestures performed by the player to interact with the game, which may enhance the enjoyment of the viewers.

The LF display system 940 can also include a sensory feedback assembly that provides tactile feedback or other forms of feedback to viewers in the arena 900. For example, if an explosion or impact takes place in the game world, the LF display system may create an ultrasonic pressure wave and project the wave towards viewers in the arena 900 such that the viewers may experience the mechanical energy of the “explosion.” In some examples, this occurs in regions where the game display area 910 overlaps with the viewing volume 920.

In an embodiment, the LF display system is configured to transmit information via a network (e.g., as a content stream). In various configurations, the information may be video game data, holographic video game content, compressed representations of video game data or holographic video game content, etc. The information transmitted over the network can be received by other LF display systems such that the holographic video game content may be displayed at another location. This allows the eSports match to be viewed by viewers who are not present in the arena 900. An example of a display system that receives streaming holographic video game content from a network is described below with respect to FIG. 10.

FIG. 10 illustrates a LF display system built into a table, in accordance with an example embodiment. The LF display system 1040 includes a LF display assembly comprised of an array of LF display modules (“LF array” 1020) on the top surface of the table 1004. While the illustrated LF display system 1040 is built into a table 1004, it could be built into other flat surfaces (e.g., a floor, a wall, etc.). The LF array 1020 is configured to present holographic video game content 1030 to viewers in viewing locations that can view the top surface of the table 1004. In the illustrated example, viewers 1032 are in viewing locations surrounding the table 1004 such that each viewer 1032 can perceive the holographic video game content 1030 presented by the LF array 1020. As previously described, viewers in other viewing locations may also perceive the holographic video game content 1030 presented by the LF array 1020. In FIG. 10, the LF array 1020 is an embodiment of the LF display assembly 510. Further, the LF display system 1040 is an embodiment of the LF display system 500.

The LF array 1020 presents holographic video game content 1030 of a tennis simulation game such that it appears on the top surface of the table 1004. A viewer 1032 may be able to perceive and interact with the presented holographic content 1030. In this example, the LF display system receives video game data via a network, converts the video game data to holographic video game content, and presents the holographic video game content 1030 on the top surface of the table 1004.

In some embodiments, a viewer 1032 can interact with the LF display system 1040 to change the presented holographic video game content 1030. For example, the LF display system can be configured to receive auditory cues, visual cues, etc. and change the holographic video game content 1030 in response. As an illustration, a viewer 1032 may state “Show bird's eye view,” and the LF display system records the audio, recognizes the command, and changes the viewpoint of the displayed holographic video game content 1030 in response. Similarly, a viewer 1032 can interact with the holographic content 1030 to rotate the view displayed by the table. As an illustration, a viewer can touch a player in the presented holographic video game content 1030 with one hand and make a rotation gesture with the other hand. In this case, the LF display system captures an image of the viewer making the gesture, recognizes the gesture, and rotates the holographic content in response.
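The cue-to-view mapping described above can be sketched as follows. This is a minimal illustration only; the cue strings and the view model are assumptions of this sketch, and a real system would sit behind speech and gesture recognizers rather than receive pre-recognized cue strings.

```python
class TableView:
    """Illustrative view state for the table display described above."""
    def __init__(self):
        self.mode = "side"       # current camera mode
        self.rotation_deg = 0.0  # rotation about the table's vertical axis

    def handle_cue(self, cue):
        """Apply one recognized voice or gesture cue to the view."""
        if cue == "show_birds_eye_view":
            # Recognized voice command: switch the viewpoint.
            self.mode = "birds_eye"
        elif cue.startswith("rotate:"):
            # Recognized two-hand rotation gesture, with an angle payload.
            angle = float(cue.split(":")[1])
            self.rotation_deg = (self.rotation_deg + angle) % 360.0

view = TableView()
view.handle_cue("show_birds_eye_view")
view.handle_cue("rotate:90")
```

The design choice here is to separate recognition (producing a cue) from view mutation (consuming it), so the same handler serves both auditory and visual cues.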

In some configurations, the LF display system 1040 can additionally or alternatively present holographic video game content from a streaming service implemented on a server (e.g., TWITCH, YOUTUBE). For example, the LF display system 1040 can present a user interface that allows users to select between different items of prerecorded holographic game content (e.g., prerecorded eSports matches, other prerecorded games) and/or different live streams of holographic game content (e.g., a live eSports match, live streams of a video game streamer). Upon receiving a user selection of an item or channel of holographic game content, the LF display system 1040 sends a request for the selected holographic game content to the streaming service, the streaming service sends the holographic video game content to the LF display system 1040, and the LF display system 1040 presents the holographic game content.
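The select-request-present flow above can be sketched with in-memory stubs. The `StreamingService` class and its catalog stand in for a real streaming backend; all class names, item identifiers, and payloads are assumptions of this sketch.

```python
class StreamingService:
    """Stub for a streaming service hosting holographic game content."""
    def __init__(self, catalog):
        self.catalog = catalog  # item id -> content payload

    def request(self, item_id):
        # Return the requested content, or None if unknown.
        return self.catalog.get(item_id)

class LFDisplaySystem:
    """Stub display system that selects and presents streamed content."""
    def __init__(self, service):
        self.service = service
        self.presented = None

    def select(self, item_id):
        # User selection -> request to the service -> present the content.
        content = self.service.request(item_id)
        if content is not None:
            self.presented = content
        return content

service = StreamingService({"match_42": "holographic eSports match"})
display = LFDisplaySystem(service)
display.select("match_42")
```

The flow mirrors the paragraph above: selection triggers a request, the service returns the content, and the display presents it.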

FIG. 11 is a flow chart illustrating a process for implementing a holographic video gaming system, in accordance with one or more embodiments. The process 1100 is performed by a holographic video gaming system comprising a gaming application that generates holographic video game data displayed by a LF display system (e.g., the LF display system 500). The process 1100 may include additional or fewer steps, and the steps may occur in a different order. Further, various steps, or combinations of steps, can be repeated any number of times during execution of the method.

A holographic video gaming system executes a gaming application to produce video game data for a light field game experience. The gaming application may be executing on a LF display system (e.g., LF display system500) or some other computational device configured to execute the gaming application.

During execution of the gaming application, a user produces a variety of control inputs that can influence a game state, and the holographic video gaming system receives 1110 the control inputs. The control inputs may be produced by the user interacting with an interface of the LF display system. For example, a user may click a mouse to cause a character in the game to move. Additionally, the control inputs may be produced by the LF display system interpreting sounds, motions, gestures, etc. generated by the user. For example, a user may thrust her hand, and the position of the hand and the gesture are captured by the tracking system 580 and analyzed by the tracking module 526 within the LF display system. This tracked hand motion may be input to the gaming application, which makes a character in the gaming application thrust her sword.

The gaming application generates 1120 holographic video game data using the received control inputs. For example, the gaming application may receive and interpret the mouse clicks and generate video game data representing the movement of the character. Similarly, the gaming application may receive and interpret the motions corresponding to a thrust and cause a character to thrust her sword. In some situations, the control inputs may be received via a network and the holographic video game data is generated in response. The LF display system receives the holographic video game data.

The LF display system of the holographic video gaming system generates 1130 display instructions for the holographic video game content using the holographic video game data. For example, the LF display system may generate display instructions corresponding to a character's movement, or any other scene, for a LF display assembly. The display instructions may include instructions for the projection of more than one type of energy. For example, the display instructions may include instructions for the projection of visual holographic objects, as well as instructions for the projection of volumetric tactile surfaces near or coincident with the holographic objects.
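The pairing of visual and tactile projection instructions described above can be sketched as follows. The instruction schema (dictionaries with an energy type per entry) is an assumption of this sketch, not the disclosure's actual instruction format.

```python
def build_display_instructions(scene):
    """Build per-object instructions, pairing visual output with a
    coincident tactile surface when the object is flagged tactile."""
    instructions = []
    for obj in scene["objects"]:
        # Electromagnetic (visual) projection instruction for the object.
        instructions.append({"energy": "electromagnetic",
                             "object": obj["name"],
                             "position": obj["position"]})
        if obj.get("tactile"):
            # Ultrasonic instruction for a volumetric tactile surface
            # coincident with the visual holographic object.
            instructions.append({"energy": "ultrasonic",
                                 "object": obj["name"],
                                 "position": obj["position"]})
    return instructions

scene = {"objects": [{"name": "sword", "position": (1, 2, 3), "tactile": True}]}
instrs = build_display_instructions(scene)
```

Giving each instruction its own energy type lets one instruction stream drive both the display modules and the ultrasonic emitters.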

The LF display system presents 1140 the holographic video game data. Continuing with the example above, the display instructions, when executed, present the video game data as holographic video game content of the character moving. In many examples, the video game data and holographic video game content are generated by a holographic video gaming system at a single location. However, in some situations, the video game data and holographic video game content may be generated in different locations. For example, the video game data may be generated at a remote server and subsequently transmitted via a network to a location with a gaming application and a LF display system. The gaming application may generate holographic video game data which is provided directly to the LF display system and displayed to a player and/or one or more viewers.

The holographic video gaming system presents the holographic video game content using an LF display system. In various configurations, the holographic video game content may be received in a first format and converted into a second format before presentation. Further, the holographic video game content may be modified such that it is suitable for the hardware configuration (e.g., the game display area) of the LF display system presenting the holographic video game content. For example, the holographic video game data may be converted for presentation at a lower resolution, in a smaller space, with additional sensory content, etc., based on the configuration of the LF display system presenting the holographic video game content.
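The hardware-dependent conversion described above can be sketched as a simple clamp-and-scale step. The field names (`resolution`, `extent`, `max_resolution`, `display_area`) are assumptions of this sketch.

```python
def adapt_to_hardware(content, hw):
    """Downscale resolution and shrink spatial extent so the content fits
    the presenting LF display system's hardware configuration."""
    adapted = dict(content)
    # Clamp resolution to what the display hardware supports.
    adapted["resolution"] = min(content["resolution"], hw["max_resolution"])
    # Uniformly shrink the content to fit the game display area; never enlarge.
    sx, sy = content["extent"]
    hx, hy = hw["display_area"]
    scale = min(1.0, hx / sx, hy / sy)
    adapted["extent"] = (sx * scale, sy * scale)
    return adapted

content = {"resolution": 4096, "extent": (10.0, 10.0)}
hw = {"max_resolution": 2048, "display_area": (5.0, 4.0)}
adapted = adapt_to_hardware(content, hw)
```

Uniform scaling preserves the content's aspect ratio, which matters when the same stream must appear correctly on displays of different sizes.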

To expand, the holographic video gaming system continuously generates new video game data based on control inputs, subsequently generates holographic video game data representing the video game data, and presents the holographic video game data. The holographic video game data may include, for example, scene geometry, viewpoints, textures, lighting, and other scene information used to render light fields that respond to user control inputs without a perceptible delay. For example, the holographic video gaming system generates holographic video game content for a region of a game world. The game world may be fully interactive such that the user can interact with the game world, and those interactions act as control inputs. The interactive control inputs, in turn, can change the game world presented to the user.
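The continuous receive-generate-present cycle described above can be sketched as a loop. This is a fixed-iteration stand-in for a real frame loop; the state and data structures are assumptions of this sketch, and the parenthesized step numbers refer to process 1100.

```python
def game_loop(input_source, frames):
    """Run a fixed number of frames of the receive/update/present cycle."""
    state = {"tick": 0, "events": []}
    presented = []
    for _ in range(frames):
        # Receive (1110) control inputs and fold them into the game state.
        for control_input in input_source(state["tick"]):
            state["events"].append(control_input)
        state["tick"] += 1
        # Generate (1120) holographic video game data from the new state.
        game_data = {"tick": state["tick"],
                     "events": list(state["events"])}
        # Generate instructions and present (1130/1140) the data.
        presented.append(game_data)
    return presented

# One "move" input arrives on the first frame, then none.
frames = game_loop(lambda t: ["move"] if t == 0 else [], frames=3)
```

Because each iteration consumes inputs before rendering, an interaction on one frame is reflected in the content presented on that same frame, matching the no-perceptible-delay requirement above.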

Additional Configuration Information

The foregoing description of the embodiments of the disclosure has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.

Some portions of this description describe the embodiments of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has at times proven convenient to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.

Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.

Embodiments of the disclosure may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

Embodiments of the disclosure may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.

Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.

Claims

  1. A gaming system comprising: a gaming application configured to produce holographic video game data for a light field game experience during execution of the gaming application;a light field (LF) display assembly configured to present holographic content;and a light field processing engine configured to: receive the holographic video game data;and generate display instructions for the light field display assembly to present the holographic video game data as holographic content using the LF display assembly, the holographic content representing a scene within the light field game experience, the LF processing engine is further configured to determine a hardware configuration of the LF display assembly, and the LF processing engine is further configured to generate holographic content representing the scene based on the hardware configuration of the LF display assembly.
  1. The gaming system of claim 1, wherein: the LF processing engine is further configured to determine a geometric orientation of the LF display assembly;and the LF processing engine is further configured to generate holographic content representing the scene based on the geometric orientation geometric orientation of the LF display system.
  2. The gaming system of claim 2, wherein the geometric orientation includes any of: a number of display panels of the LF display assembly, a relative orientation of the display panels, a height of the display panels, a width of the display panels, and a layout of the display panels.
  3. The gaming system of claim 1, wherein the holographic video game data for the LF game experience is rendered as any of: one or more viewpoints within the scene of the light field game experience, a light field representing the light field game experience, a deep image comprising multiple red, green, blue, and alpha samples, a reflectance determined by a function of illumination geometry and viewing geometry for one or more surfaces in the light field game experience.
  4. The gaming system of claim 1, wherein the holographic video game data for the light field game experience is rendered as one or more viewpoints within the scene of the light field game experience and a first viewpoint of the one or more viewpoints corresponds to a center view having a larger resolution than the other viewpoints.
  5. The gaming system of claim 1, wherein the holographic video game data is rendered as one or more viewpoints within a scene of the light field game experience and at least some of the viewpoints are rendered at lower resolution than other viewpoints.
  6. The gaming system of claim 1, wherein the holographic video game data is rendered as one or more volumetric tactile surfaces, the holographic video game data describing the volumetric tactile surfaces as any of: a surface texture, a material hardness, a ductility, or a tactile strength.
  7. The gaming system of claim 1, wherein the holographic video game data is rendered as one or more volumetric tactile surfaces, the holographic video game data describing the volumetric tactile surfaces as a sound field comprising one or more amplitudes for a sound wave generated at one or more coordinates.
  8. The gaming system of claim 1, further comprising: an encoder to encode the holographic video game data to encoded video game data;and a decoder configured to decode encoded holographic video game data when generating the display instructions for the light field display assembly.
  9. The gaming system of claim 9, wherein the encoded holographic video game data is in a first format and the decoded holographic video game data is a second format.
  10. The gaming system of claim 10, wherein the first format is a vectorized data format and the second format is a rasterized data format.
  11. The gaming system of claim 1, further comprising: a network interface configured to receive global game information from a gaming server via a network connection, and wherein the gaming application is further configured to produce holographic video game data using the global game data received from the server.
  12. The gaming system of claim 12, wherein the gaming server is located at a same location as the gaming system.
  13. The gaming system of claim 12, wherein the gaming server is located at a different location as the gaming system.
  14. The gaming system of claim 1, wherein a gaming server administers the light field game experience via a network connection and wherein the light field game experience comprises any of: collecting inputs from clients, collecting timestamps from the clients, keeping the world state of the game synchronized between players, making sure all player clients are playing by the rules, verifying and enforcing player movement, or sending regular snapshots of the world state of the game to all clients.
  15. The gaming system of claim 1, further comprising: a network interface configured to receive at least some portion of the holographic video game data from a gaming server via a network connection, and wherein the holographic video game content received from the server is in a first data format, and the gaming system further comprises: a decoder configured to decode the holographic video game content in the first data format to a second data format, the second data format configured for generating display instructions for a LF display assembly.
  16. The gaming system of claim 1, wherein the gaming application is configured to generate video game data at a service level of a plurality of service levels, each of the service levels corresponding to a different quality holographic content presented by the LF display assembly.
  17. The gaming system of claim 17, wherein each service level is protected by a corresponding digital rights management protection scheme.
  18. The gaming system of claim 17, wherein a first service level includes holographic content comprising visual information, and a second service level includes holographic content comprising visual information and tactile information.
  19. The gaming system of claim 1, wherein the presented holographic content includes a first type of energy and a second type of energy.
  20. The gaming system of claim 20, wherein the first type of energy is electromagnetic energy and the second type of energy is ultrasonic energy.
  21. The gaming system of claim 21, wherein the ultrasonic energy forms a volumetric tactile surface.
  22. The gaming system of claim 20, wherein the first type of energy and second type of energy are presented at the same location such that the LF display assembly presents a volumetric tactile surface at or near to holographic objects.
  23. The gaming system of claim 1, further comprising: a light field recording assembly comprising one or more energy sensors configured to record electromagnetic energy as light field content.
  24. The gaming system of claim 24, wherein the holographic video game data includes the recorded light field content.
  25. The gaming system of claim 25, wherein the gaming application produces holographic video game content based on the recorded light field content.
  26. The gaming system of claim 1, further comprising: a light field recording assembly comprising one or more pressure sensors configured to record mechanical energy.
  27. The gaming system of claim 27, wherein the holographic video game data includes the recorded mechanical energy.
  28. The gaming system of claim 27, wherein the gaming application produces holographic video game content based on the recorded mechanical energy.
  29. A gaming system comprising: a gaming application configured to generate a plurality of inputs for a light field game experience during execution of the gaming application;a network interface configured to: transmit the plurality of inputs to a gaming server via a network connection, and receive, via the network connection, holographic video game content comprising a plurality of display instructions for presenting the holographic video game content of the light field game experience;and a light field (LF) display assembly configured to input the display instructions and present the holographic video game content as a scene within the light field gaming experience, wherein the gaming system is further configured to access a hardware configuration of the LF display assembly, the network interface is further configured to transmit the hardware configuration of the LF display assembly to the gaming server, and the received holographic content representing the scene is configured for presentation using a LF display assembly having the hardware configuration.
  30. The gaming system of claim 30, wherein: the gaming system is further configured to access a geometric orientation of the LF display assembly, and the network interface is further configured to transmit the geometric configuration of the LF display assembly to the gaming server, and the received holographic content representing the scene is configured for presentation using an LF display assembly having the geometric orientation.
  31. The gaming system of claim 30, wherein: the gaming system is further configured to access a configuration of a location of the LF display assembly, the network interface is further configured to transmit the configuration of the location of the LF display assembly to the gaming server, and the received holographic content representing the scene is configured for presentation using a LF display system in the location with the configuration.
  32. The gaming system of claim 30, wherein the network interface is further configured to transmit holographic video game data to one or more additional LF display assemblies via the network connection.
  33. The gaming system of claim 33, wherein the one or more additional LF display assemblies are at a different location than the LF display assembly.
  34. The gaming system of claim 30, wherein the network interface is further configure to transmit, via the network connection, holographic video game data to the LF display assembly configured to present the holographic video game data as holographic content to an audience of an eSports competition.
  35. The gaming system of claim 35, wherein the audience and LF display assembly are located at a different location than the gaming system.
  36. The gaming system of claim 30, further comprising: a light field recording assembly comprising one or more pressure sensors configured to record mechanical energy.
  37. The gaming system of claim 37, wherein the network interface is further configured to transmit a representation of the recorded mechanical energy to a video game server.
  38. The gaming system of claim 38 wherein the received holographic video game content is based on the representation of the recorded mechanical energy transmitted to the gaming server.
  39. A gaming system comprising: a gaming application configured to produce holographic video game data for a light field game experience during execution of the gaming application; a light field (LF) display assembly configured to present holographic content; and a light field processing engine configured to: receive the holographic video game data; and generate display instructions for the light field display assembly to present the holographic video game data as holographic content using the LF display assembly, the holographic content representing a scene within the light field game experience, wherein the holographic video game data is rendered as one or more volumetric tactile surfaces, the holographic video game data describing the volumetric tactile surfaces as any of: a surface texture, a material hardness, a ductility, or a tactile strength.
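The code below is not part of the patent; it is a minimal sketch of how the tactile-surface properties enumerated in claim 39 (texture, hardness, ductility, tactile strength) might be modeled as a data structure, with all type names and normalization choices being illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum, auto

class SurfaceTexture(Enum):
    # Hypothetical texture categories; the claim names only "a surface texture".
    SMOOTH = auto()
    ROUGH = auto()
    RIDGED = auto()

@dataclass
class VolumetricTactileSurface:
    """One volumetric tactile surface described by holographic video game data."""
    texture: SurfaceTexture
    hardness: float          # assumed normalized: 0.0 (soft) .. 1.0 (rigid)
    ductility: float         # assumed normalized resistance to deformation
    tactile_strength: float  # assumed normalized ultrasonic field intensity

    def clamped(self) -> "VolumetricTactileSurface":
        """Return a copy with all scalar properties clamped to [0.0, 1.0]."""
        def clamp(v: float) -> float:
            return max(0.0, min(1.0, v))
        return VolumetricTactileSurface(
            self.texture, clamp(self.hardness),
            clamp(self.ductility), clamp(self.tactile_strength))
```

A renderer would pass such a record to the ultrasonic emitters alongside the visual hologram; clamping keeps out-of-range authoring values from producing an unsafe pressure field.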
  40. A gaming system comprising: a gaming application configured to produce holographic video game data for a light field game experience during execution of the gaming application; a light field (LF) display assembly configured to present holographic content; and a light field processing engine configured to: receive the holographic video game data; and generate display instructions for the light field display assembly to present the holographic video game data as holographic content using the LF display assembly, the holographic content representing a scene within the light field game experience, wherein the gaming application is configured to generate video game data at a service level of a plurality of service levels, each of the service levels corresponding to a different quality of holographic content presented by the LF display assembly.
  41. The gaming system of claim 40, wherein each service level is protected by a corresponding digital rights management protection scheme.
  42. The gaming system of claim 41, wherein a first service level includes holographic content comprising visual information, and a second service level includes holographic content comprising visual information and tactile information.
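The following is not part of the claims; it is an illustrative sketch of the tiered service levels described in claims 40 through 42, where each tier carries its own DRM scheme and a higher tier adds tactile content to the visual content. Tier names and DRM labels are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceLevel:
    name: str        # hypothetical tier name
    drm_scheme: str  # each tier protected by its own DRM scheme (claim 41)
    visual: bool
    tactile: bool

# Claim 42: a first level with visual information only, a second with
# visual and tactile information.
SERVICE_LEVELS = [
    ServiceLevel("basic",   drm_scheme="drm-tier-1", visual=True, tactile=False),
    ServiceLevel("premium", drm_scheme="drm-tier-2", visual=True, tactile=True),
]

def content_channels(level: ServiceLevel) -> list:
    """Energy channels the LF display assembly renders for a given tier."""
    channels = []
    if level.visual:
        channels.append("electromagnetic")
    if level.tactile:
        channels.append("ultrasonic")
    return channels
```

Under this model, the gaming application would select a `ServiceLevel` at launch and generate only the channels that the viewer's tier (and its DRM scheme) unlocks.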
  43. A gaming system comprising: a gaming application configured to produce holographic video game data for a light field game experience during execution of the gaming application; a light field (LF) display assembly configured to present holographic content; and a light field processing engine configured to: receive the holographic video game data; and generate display instructions for the light field display assembly to present the holographic video game data as holographic content using the LF display assembly, the holographic content representing a scene within the light field game experience, wherein the presented holographic content includes a first type of energy and a second type of energy.
  44. The gaming system of claim 43, wherein the first type of energy is electromagnetic energy and the second type of energy is ultrasonic energy.
  45. The gaming system of claim 44, wherein the ultrasonic energy forms a volumetric tactile surface.
  46. The gaming system of claim 44, wherein the first type of energy and the second type of energy are presented at the same location such that the LF display assembly presents a volumetric tactile surface at or near holographic objects.
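As a sketch (not part of the claims) of the co-location requirement in claims 43 through 46, the check below treats the visual (electromagnetic) and tactile (ultrasonic) projections as 3-D points and asks whether the tactile surface lands "at or near" the holographic object; the tolerance value is an assumption for illustration.

```python
import math

def colocated(em_pos, us_pos, tolerance=0.01):
    """True when the ultrasonic tactile field is projected at or near the
    electromagnetic holographic object, so a viewer feels a surface where
    the object appears. Positions are (x, y, z) tuples in meters; the
    1 cm tolerance is an illustrative assumption, not from the patent."""
    return math.dist(em_pos, us_pos) <= tolerance
```

A display controller could run such a check per frame and re-steer the ultrasonic emitters whenever a holographic object moves.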
  47. A gaming system comprising: a gaming application configured to produce holographic video game data for a light field game experience during execution of the gaming application; a light field (LF) display assembly configured to present holographic content; a light field processing engine configured to: receive the holographic video game data; and generate display instructions for the light field display assembly to present the holographic video game data as holographic content using the LF display assembly, the holographic content representing a scene within the light field game experience; and a light field recording assembly comprising one or more energy sensors configured to record electromagnetic energy as light field content.
  48. The gaming system of claim 47, wherein the holographic video game data includes the recorded light field content.
  49. The gaming system of claim 48, wherein the gaming application produces holographic video game content based on the recorded light field content.
  50. A gaming system comprising: a gaming application configured to produce holographic video game data for a light field game experience during execution of the gaming application; a light field (LF) display assembly configured to present holographic content; a light field processing engine configured to: receive the holographic video game data; and generate display instructions for the light field display assembly to present the holographic video game data as holographic content using the LF display assembly, the holographic content representing a scene within the light field game experience; and a light field recording assembly comprising one or more pressure sensors configured to record mechanical energy.
  51. The gaming system of claim 50, wherein the holographic video game data includes the recorded mechanical energy.
  52. The gaming system of claim 51, wherein the gaming application produces holographic video game content based on the recorded mechanical energy.
  53. A gaming system comprising: a gaming application configured to generate a plurality of inputs for a light field game experience during execution of the gaming application; a network interface configured to: transmit the plurality of inputs to a gaming server via a network connection, and receive, via the network connection, holographic video game content comprising a plurality of display instructions for presenting the holographic video game content of the light field game experience; and a light field (LF) display assembly configured to input the display instructions and present the holographic video game content as a scene within the light field game experience, wherein: the gaming system is further configured to access a configuration of a location of the LF display assembly, the network interface is further configured to transmit the configuration of the location of the LF display assembly to the gaming server, and the received holographic content representing the scene is configured for presentation using a LF display system in the location with the configuration.
  54. A gaming system comprising: a gaming application configured to generate a plurality of inputs for a light field game experience during execution of the gaming application; a network interface configured to: transmit the plurality of inputs to a gaming server via a network connection, and receive, via the network connection, holographic video game content comprising a plurality of display instructions for presenting the holographic video game content of the light field game experience; and a light field (LF) display assembly configured to input the display instructions and present the holographic video game content as a scene within the light field game experience, wherein the network interface is further configured to transmit holographic video game data to one or more additional LF display assemblies via the network connection, and wherein the one or more additional LF display assemblies are at a different location than the LF display assembly.
  55. A gaming system comprising: a gaming application configured to generate a plurality of inputs for a light field game experience during execution of the gaming application; a network interface configured to: transmit the plurality of inputs to a gaming server via a network connection, and receive, via the network connection, holographic video game content comprising a plurality of display instructions for presenting the holographic video game content of the light field game experience; and a light field (LF) display assembly configured to input the display instructions and present the holographic video game content as a scene within the light field game experience, wherein the network interface is further configured to transmit, via the network connection, holographic video game data to the LF display assembly configured to present the holographic video game data as holographic content to an audience of an eSports competition, and wherein the audience and LF display assembly are located at a different location than the gaming system.
  56. A gaming system comprising: a gaming application configured to generate a plurality of inputs for a light field game experience during execution of the gaming application; a network interface configured to: transmit the plurality of inputs to a gaming server via a network connection, and receive, via the network connection, holographic video game content comprising a plurality of display instructions for presenting the holographic video game content of the light field game experience; a light field (LF) display assembly configured to input the display instructions and present the holographic video game content as a scene within the light field game experience; and a light field recording assembly comprising one or more pressure sensors configured to record mechanical energy.
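Outside the claims themselves, claims 53 through 56 describe a client-server round trip: the gaming system transmits its inputs to a gaming server and receives back display instructions for the LF display assembly. The sketch below illustrates one plausible serialization of that exchange; the JSON field names and the `mock_server` stand-in are assumptions, not anything specified by the patent.

```python
import json

def encode_inputs(inputs):
    """Serialize the plurality of game inputs for transmission to the gaming server."""
    return json.dumps({"inputs": inputs}).encode()

def decode_display_instructions(payload):
    """Parse the holographic video game content returned by the server into
    display instructions for the LF display assembly."""
    return json.loads(payload.decode())["display_instructions"]

def mock_server(request):
    """Illustrative stand-in for the remote gaming server: turns each input
    into one display instruction."""
    inputs = json.loads(request.decode())["inputs"]
    instructions = [{"frame": i, "source": inp} for i, inp in enumerate(inputs)]
    return json.dumps({"display_instructions": instructions}).encode()
```

In a real deployment the transport would be a network connection rather than a function call, and the server's rendering would account for the display assembly's location and geometric orientation as claims 30, 31, and 53 describe.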
