U.S. Patent No. 10,953,330: Reality vs Virtual Reality Racing
Assignee: Buxton Global Enterprises, Inc.
Issued: March 23, 2021
Filed: July 22, 2019 (claiming priority to July 7, 2017)
Overview:
U.S. Patent No. 10,953,330 (the ’330 patent) relates to tracking the locations of a physical object and a virtual vehicle on a racetrack and displaying both so that physical and virtual racing merge. The ’330 patent describes a method for displaying a virtual vehicle that identifies several points of view at a racecourse, sometimes including one inside a physical vehicle, and provides them to a simulation system that generates the virtual vehicle. The system mirrors the real-world points of view in a virtual world so that virtual and real drivers can compete in the “same” space, and the points of view in both worlds are also used to place a representation of the physical object in the virtual world. As the virtual vehicle moves, the simulation system uses the virtual points of view to calculate which parts of the virtual vehicle are visible from each real-world point of view, including when the vehicle is partially obscured by physical objects. The simulation system then provides the visible portions of the virtual vehicle to a display system for a real-life driver or, in some versions, an audience.
The ’330 patent aims to let racers from the physical and virtual worlds compete against each other in a single race. In some versions, the ’330 patent provides predictive information to audience members, such as trajectory information or the likelihood of a virtual vehicle overtaking a physical one. This could be an interesting fusion of e-sports and real-life sports for players, drivers, and audiences alike.
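To make the mechanism concrete, the core loop described above can be sketched in a few lines. This is only an illustration, not the patent's implementation: the names (`Pos`, `world_to_virtual`, `simulation_step`), the flat 2-D geometry, and the single-obstacle occlusion test are all assumptions of the sketch.

```python
from dataclasses import dataclass
import math


@dataclass(frozen=True)
class Pos:
    x: float
    y: float


def world_to_virtual(p: Pos) -> Pos:
    # The patent mirrors real-course positions into the virtual world;
    # an identity mapping stands in for that calibration here.
    return p


def is_unobscured(viewpoint: Pos, target: Pos, obstacle: Pos,
                  obstacle_radius: float = 1.0) -> bool:
    """True if the obstacle does not block the viewpoint-to-target sight line."""
    dx, dy = target.x - viewpoint.x, target.y - viewpoint.y
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return True
    # Project the obstacle onto the sight line, clamped to the segment.
    t = ((obstacle.x - viewpoint.x) * dx + (obstacle.y - viewpoint.y) * dy) / seg_len2
    t = max(0.0, min(1.0, t))
    cx, cy = viewpoint.x + t * dx, viewpoint.y + t * dy
    return math.hypot(obstacle.x - cx, obstacle.y - cy) > obstacle_radius


def simulation_step(viewpoints, virtual_vehicle, physical_vehicle):
    """Mirror each real point of view into the virtual world and report
    whether the virtual vehicle is visible (unobscured by the physical
    vehicle's representation) from each one."""
    v_target = world_to_virtual(virtual_vehicle)
    v_obstacle = world_to_virtual(physical_vehicle)
    return {name: is_unobscured(world_to_virtual(vp), v_target, v_obstacle)
            for name, vp in viewpoints.items()}
```

In this toy step, a driver looking straight past the physical car would receive no pixels of the virtual vehicle, while a grandstand viewpoint off the sight line would see it whole.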
Abstract:
A method for displaying a virtual vehicle includes identifying a position of a physical vehicle at a racecourse, identifying a position of a point of view at the racecourse, providing a portion of the virtual vehicle visible from a virtual position of the point of view. The method operates by calculating the virtual position within a virtual world based on the position of the point of view. A system for displaying virtual vehicles includes a first sensor detecting a position of a physical vehicle at a racecourse, a second sensor detecting a position of a point of view at the racecourse, and a simulation system providing a portion of the virtual vehicle visible from a virtual position of the point of view. The simulation system is configured to calculate the virtual position of the point of view within a virtual world based on the position of the point of view.
Illustrative Claim:
The invention claimed is:
- A method for displaying a virtual vehicle comprising: identifying respective positions of multiple points of view at a racecourse; providing the respective positions of the points of view at the racecourse to a simulation system; providing a position of a physical object at the racecourse to a simulation system; calculating, by the simulation system, a virtual world comprising the virtual vehicle; calculating, by the simulation system, respective virtual positions of the points of view within the virtual world based on the respective positions of the points of view at the racecourse; calculating, by the simulation system, a representation of the physical object in the virtual world between the respective virtual positions of the points of view and the virtual vehicle within the virtual world; calculating, by the simulation system, respective portions of the virtual vehicle within the virtual world that are visible from the corresponding virtual positions of the points of view, wherein the respective portions of the virtual vehicle within the virtual world that are visible from the corresponding virtual positions of the points of view comprise respective portions of the virtual vehicle that are unobscured, from the respective virtual position, by the representation of the physical object; outputting, by the simulation system, the respective portions of the virtual vehicle visible from the virtual positions of the points of view; providing, to a display system, the respective portions of the virtual vehicle visible from the virtual positions of the points of view; generating, at the display system, representations of the respective portions of the virtual vehicle visible from the virtual positions of the points of view; and displaying a series of representations of the virtual vehicle over a period of time to simulate a trajectory of the virtual vehicle on the racecourse, wherein the series of representations comprises the generated representations.
Illustrative Figure
Description
DETAILED DESCRIPTION
Embodiments described herein merge real world and virtual world racing competitions. For example, real world racing champions and virtual world racing champions can compete to determine an overall champion. Advantageously, each champion can stay within their respective “world” and still compete with a champion from another “world.” In effect, embodiments described herein enable live participants to compete against virtual participants.
The terms “physical” and “real-world” are used interchangeably herein and to contrast with “virtual world.” For example, a “physical vehicle” or “real-world vehicle” can be physically present on or at a racecourse. A “virtual vehicle” cannot be physically present on the same racecourse. For example, a “virtual vehicle” may be a graphically generated vehicle that is shown on a display. In some embodiments, a “virtual vehicle” is a representation in a software-based environment.
In some embodiments, a method for displaying a virtual vehicle includes identifying a position of a physical vehicle at a racecourse, identifying a position of a point of view at the racecourse, and providing, to a display system, a portion of the virtual vehicle visible from a virtual position of the point of view. Problems solved by embodiments disclosed herein can include overcoming the lack of realism experienced by users of prior solutions. In some embodiments herein, providing visible portions of the virtual vehicle to the user increases the realism experienced by the user. The increased realism provides a reliable and re-producible user experience by providing a real-world race that includes a virtual vehicle.
In some embodiments, the visible portion of the virtual vehicle is calculated based on a virtual position of the physical vehicle in a virtual world, a virtual position of the point of view in the virtual world, and a virtual position of the virtual vehicle in the virtual world. Problems solved by embodiments disclosed herein can include how to provide a visible portion of a virtual vehicle. In some embodiments herein, providing visible portions of the virtual vehicle through a virtual calculation of the visible portion increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and re-producible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by a representation of the physical vehicle at a virtual position of the physical vehicle in the virtual world.
In some embodiments, the method further includes simulating, by the simulation system, an interaction between the virtual vehicle and the representation of the physical vehicle in the virtual world, the portion of the virtual vehicle visible from the virtual position of the point of view is calculated based on the interaction.
In some embodiments, the position of the point of view at the racecourse includes a point of view of an operator of the physical vehicle, and identifying a position of a point of view at the racecourse includes detecting, at a sensor, the point of view of the operator of the physical vehicle, the method further including: identifying a position of a physical object; receiving kinematics information of the virtual vehicle; generating, at a display system, a representation of the virtual vehicle based on the position of the physical object, the position of the point of view at the racecourse, and the kinematics information; and displaying the representation of the virtual vehicle such that the virtual vehicle is aligned with the physical object from the perspective of the position of the point of view at the racecourse.
In some embodiments, the method further includes generating, at a display system, the representation of the portion of the virtual vehicle visible from the virtual position of the point of view.
In some embodiments, the method further includes displaying, by the display system, a series of representations of the virtual vehicle over a period of time to simulate a trajectory of the virtual vehicle on the racecourse, the series of representations includes the representation of the portion of the virtual vehicle visible from the virtual position of the point of view. In some embodiments, a predicted trajectory of the virtual vehicle is displayed. The prediction may be based on current trajectory, acceleration, current vehicle parameters, etc. This may allow an audience member to anticipate if a virtual vehicle is likely to overtake a physical vehicle. The predicted trajectory may be presented as a line, such as a yellow line. Other displays may also be included, such as “GOING TO PASS!” or “GOING TO CRASH!”
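The predicted-trajectory embodiment could be sketched as a constant-acceleration extrapolation of both vehicles' positions along the track. The function name, the 1-D track-distance parameterization, and the prediction horizon are assumptions of this sketch, not the patent's method:

```python
def predict_overtake(s_virtual, v_virtual, a_virtual,
                     s_physical, v_physical, a_physical,
                     horizon=5.0, dt=0.1):
    """Extrapolate both vehicles' track distances under constant acceleration
    and return the first time (within `horizon` seconds) at which the virtual
    vehicle passes the physical vehicle, or None if no overtake is predicted.

    s_* = current track distance, v_* = speed, a_* = acceleration.
    """
    t = 0.0
    while t <= horizon:
        s_v = s_virtual + v_virtual * t + 0.5 * a_virtual * t * t
        s_p = s_physical + v_physical * t + 0.5 * a_physical * t * t
        if s_v > s_p:
            return round(t, 2)  # predicted overtake time in seconds
        t += dt
    return None
```

A display system could then draw the extrapolated positions as the yellow line mentioned above, or show a “GOING TO PASS!” banner whenever a finite overtake time is returned.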
In some embodiments, the method further includes storing, by the display system, a digital 3-D model of the virtual vehicle used to generate each representation from the series of representations, each representation is generated by the display system based on the digital 3-D model.
In some embodiments, the method further includes receiving a digital 3-D model of the virtual vehicle used to generate each representation from the series of representations, each representation is generated by the display system based on the digital 3-D model.
In some embodiments, the kinematics information includes one or more vectors of motion, one or more scalars of motion, a position vector, a GPS location, a velocity, an acceleration, an orientation, or a combination thereof of the virtual vehicle.
In some embodiments, identifying the position of the physical vehicle includes detecting one or more vectors of motion, one or more scalars of motion, a position vector, a GPS location, a velocity, an acceleration, an orientation, or a combination thereof of the physical vehicle.
In some embodiments, identifying the position of the point of view at the racecourse includes detecting a spatial position of a head of an operator of the physical vehicle. In some embodiments, the method further includes transmitting, by a telemetry system coupled to the physical vehicle, the spatial position to a simulator system; receiving, at the telemetry system, information related to the portion of the virtual vehicle visible from the virtual position of the point of view; and displaying, to the operator of the physical vehicle, the representation of the portion of the virtual vehicle based on the information.
In some embodiments, displaying the representation of the portion of the virtual vehicle includes translating the information into a set of graphical elements and displaying the set of graphical elements. In some embodiments, the method further includes computing, at the simulation system, the information related to the portion visible from the virtual position of the point of view.
In some embodiments, displaying the series of representations of the virtual vehicle includes displaying the series of representation on a display of the physical vehicle, and the display is a transparent organic light-emitting diode (T-OLED) display that allows light to pass through the T-OLED to display the field of view to the operator.
In some embodiments, displaying the series of representations of the virtual vehicle includes displaying the series of representations on a display of the physical vehicle, and the display is an LCD display, the method further including: capturing, by a camera coupled to the physical vehicle, an image representing the field of view of the physical world as seen by the operator on the display in the physical vehicle; and outputting the image on a side of the LCD display to display the field of view to the operator, the series of representations are overlaid on the image displayed by the LCD display.
In some embodiments, displaying the series of representations of the virtual vehicle includes displaying the series of representations on a display of the physical vehicle, and the display includes a front windshield of the physical vehicle, one or more side windows of the physical vehicle, a rear windshield of the physical vehicle, one or more side mirrors, a rearview mirror, or a combination thereof.
In some embodiments, displaying the series of representations of the virtual vehicle includes displaying the series of representations on a display of a headset worn by the operator. In some embodiments, the headset is a helmet.
In some embodiments, identifying the position of the point of view at the racecourse includes detecting one or more of a spatial position of a user's eyes, a gaze direction of the user's eyes, or a focus point of the user's eyes.
In some embodiments, the method further includes: providing the position of the physical vehicle and the position of the point of view at the racecourse to a simulation system; calculating, by the simulation system, a virtual world including the virtual vehicle and a representation of the physical vehicle; calculating, by the simulation system, a virtual position of the point of view within the virtual world based on the position of the point of view at the racecourse; and calculating, by the simulation system, the portion of the virtual vehicle visible from the virtual position of the point of view, and providing, to a display system, the portion of the virtual vehicle visible from the virtual position of the point of view includes outputting, by the simulation system, the portion of the virtual vehicle visible from the virtual position of the point of view. Problems solved by embodiments disclosed herein can include how to calculate a visible portion of a virtual vehicle. In some embodiments herein, calculating the visible portion of the virtual vehicle in a virtual world increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and re-producible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, identifying the position of the physical vehicle includes receiving a location of each of two portions of the vehicle. In some embodiments, identifying the position of the physical vehicle includes receiving a location of one portion of the vehicle and an orientation of the vehicle. In some embodiments, receiving the orientation of the vehicle includes receiving gyroscope data. Problems solved by embodiments disclosed herein can include how to correctly position a physical vehicle in a virtual world for determining a visible portion of a virtual vehicle. In some embodiments herein, using a measure of orientation provides for accurate placement of the physical vehicle in the virtual world. The increased accuracy provides for a more faithful display of the visible portions of the vehicle, thereby improving the user experience.
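The two ways of fixing the physical vehicle's pose described above (locations of two portions of the vehicle, or one location plus a gyroscope-derived orientation) yield equivalent information, as a small sketch shows. The function names and flat 2-D coordinates are assumptions for illustration:

```python
import math


def pose_from_two_points(front, rear):
    """Vehicle pose from the locations of two known portions of the vehicle,
    e.g. front and rear GPS antennas: position is the midpoint, heading is
    the angle of the rear-to-front axis."""
    x = (front[0] + rear[0]) / 2
    y = (front[1] + rear[1]) / 2
    heading = math.atan2(front[1] - rear[1], front[0] - rear[0])
    return (x, y), heading


def pose_from_point_and_heading(point, heading):
    """Equivalent pose when one location plus an orientation (e.g. from
    gyroscope data) is reported instead of two locations."""
    return point, heading
```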
In some embodiments, the position of the point of view at the racecourse includes a position of a point of view of an operator of the physical vehicle at the racecourse. In some embodiments, the position of the point of view at the racecourse includes a position of a point of view of an audience member present at a racecourse and observing the physical vehicle on the racecourse. In some embodiments, the position of the point of view at the racecourse includes a position of a camera present at a racecourse and imaging the physical vehicle on the racecourse. In some embodiments, the camera images a portion of the racecourse on which the physical vehicle is racing. When the physical vehicle is travelling across the portion of the racecourse being captured by the camera, the camera may capture the physical vehicle in its video feed. When the physical vehicle is not travelling across the portion of the racecourse being captured by the camera, the camera may still capture the portion of the racecourse.
In some embodiments, identifying the position of the point of view at the racecourse includes at least one of measuring a point of gaze of eyes, tracking eye movement, tracking head position, identifying a vector from one or both eyes to a fixed point on the physical vehicle, identifying a vector from a point on the head to a fixed point on the physical vehicle, identifying a vector from a point on eye-wear to a fixed point on the physical vehicle, identifying a vector from a point on a head gear to a fixed point on the physical vehicle, identifying a vector from one or both eyes to a fixed point in a venue, identifying a vector from a point on the head to a fixed point in the venue, identifying a vector from a point on eye-wear to a fixed point in the venue, or identifying a vector from a point on a head gear to a fixed point in the venue. In some embodiments, identifying the position of the point of view at the racecourse includes measuring the point of gaze of the eyes and the measuring includes measuring light reflection or refraction from the eyes.
In some embodiments, providing the position of the physical vehicle and the position of the point of view at the racecourse includes wireless transmitting at least one position.
In some embodiments, calculating a virtual world includes transforming physical coordinates of the physical vehicle to coordinates in the virtual world and the virtual position of the physical vehicle includes the virtual coordinates.
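The coordinate transform described above might be sketched as a translate-rotate-scale mapping. The calibration parameters (`origin`, `rotation`, `scale`) are illustrative assumptions; a real system would derive them from surveying the racecourse:

```python
import math


def physical_to_virtual(p, origin, rotation, scale=1.0):
    """Map a real-world racecourse coordinate (x, y) into virtual-world
    coordinates by translating to a calibrated origin, rotating by a
    calibrated angle (radians), and scaling."""
    x, y = p[0] - origin[0], p[1] - origin[1]
    c, s = math.cos(rotation), math.sin(rotation)
    return (scale * (c * x - s * y), scale * (s * x + c * y))
```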
In some embodiments, calculating the portion of the virtual vehicle visible from the virtual position of the point of view includes: calculating a representation of the physical vehicle in the virtual world, calculating a representation of a physical object in the virtual world between the point of view and the virtual vehicle within the virtual world, and extracting a portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by the representation of the physical vehicle and the representation of the physical object. In some embodiments, the portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view includes the unobscured portion. Problems solved by embodiments disclosed herein can include how to calculate a visible portion of a virtual vehicle, including more than just the portion that is not obscured by the physical vehicle. In some embodiments herein, calculating the visible portion in a virtual world that includes physical objects in the real world increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and re-producible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, extracting the portions of the virtual vehicle may include determining which pixels are obstructed by other representations, and only displaying pixels that are not obstructed by other representations. In some embodiments, extracting the portions of the virtual vehicle may include setting a pixel alpha value of zero percent (in RGBA space) for all pixels obstructed by other representations. For example, portions of the virtual vehicle may be obstructed by other virtual representations, e.g., another virtual vehicle, or representations of physical objects, e.g., objects within a physical vehicle or the physical vehicle itself. Any observed (from the virtual position of the point of view) pixel values can be used to provide the portions of the virtual vehicle that are visible from the virtual position of the point of view. In some embodiments, the pixels of unobscured and observed portions of the virtual vehicle can each be set to include an alpha value greater than zero percent (in RGBA space) to indicate that those unobscured pixels can be seen and should be displayed. In contrast, pixels set to an alpha value of zero percent indicate that those pixels are fully transparent, i.e., invisible, and would not be displayed.
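The alpha-masking step described above can be sketched with a per-pixel depth comparison: any virtual-vehicle pixel that lies behind another representation gets an alpha value of zero. The list-of-RGBA-tuples pixel format and the function name are assumptions for illustration:

```python
def mask_obstructed_pixels(vehicle_rgba, vehicle_depth, scene_depth):
    """Set alpha to 0 (fully transparent) for every virtual-vehicle pixel
    obstructed by another representation, and keep the original alpha
    otherwise. Pixels are (R, G, B, A) tuples with A in [0, 255]; depths
    are per-pixel distances from the viewpoint (smaller = closer)."""
    out = []
    for pixel, d_vehicle, d_scene in zip(vehicle_rgba, vehicle_depth, scene_depth):
        r, g, b, a = pixel
        if d_vehicle > d_scene:       # something closer obstructs this pixel
            out.append((r, g, b, 0))  # invisible: alpha of zero percent
        else:                         # unobscured: visible, non-zero alpha
            out.append((r, g, b, a))
    return out
```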
In some embodiments, calculating the representation of the physical object between the virtual position of the point of view and the representation of the physical vehicle includes accessing a database of representations to obtain a virtual position of the physical object.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view consists of portions of the virtual vehicle that are unobscured by other representations in the virtual world.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a virtual shadow in the virtual world. In some embodiments, the virtual shadow is at least one of a shadow projected by the virtual vehicle and a shadow projected onto the virtual vehicle. In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a virtual reflection. In some embodiments, the virtual reflection is at least one of a reflection of the virtual vehicle and a reflection on the virtual vehicle.
In some embodiments, calculating, by the simulation system, a portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view includes calculating a field of view from the virtual position of the point of view and providing, to the display system, the portion of the virtual vehicle visible from the virtual position of the point of view includes displaying the portion of the virtual vehicle within the field of view.
In some embodiments, calculating, by the simulation system, a portion of the virtual vehicle within the virtual world that is visible from the position of the virtual point of view includes calculating a field of view from the virtual position of the point of view and providing, to the display system, the portion of the virtual vehicle visible from the virtual position of the point of view consists of displaying the portion of the virtual vehicle visible within the field of view.
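The field-of-view calculation in these embodiments can be sketched as an angular containment test: a portion of the virtual vehicle need only be provided to the display system if it falls within the view cone of the point of view. The 2-D horizontal-plane simplification and the `fov_degrees` default are assumptions of the sketch:

```python
import math


def in_field_of_view(viewpoint, facing_angle, target, fov_degrees=90.0):
    """True if `target` falls inside the horizontal field of view centred on
    `facing_angle` (radians) at `viewpoint`; both points are (x, y) tuples."""
    angle_to_target = math.atan2(target[1] - viewpoint[1],
                                 target[0] - viewpoint[0])
    # Wrap the angular difference into (-pi, pi] before comparing.
    diff = (angle_to_target - facing_angle + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= math.radians(fov_degrees) / 2
```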
In some embodiments, the method may facilitate a competition between two virtual vehicles on a physical racecourse. In a scenario where two virtual vehicles compete on a physical racecourse without any physical vehicles, the step of “identifying a position of a physical vehicle” would be unnecessary. The method could include identifying a position of a point of view at the racecourse and providing, to a display system, a portion of the virtual vehicle visible from the position of the point of view at the racecourse. All aspects of the foregoing methods not concerning the position of the physical vehicle could be applied in such an embodiment. In some embodiments, the virtual vehicles are given special properties and a video game appearance. In some embodiments, video game attributes (i.e., virtual objects) can be similarly applied to physical vehicles by overlaying those video game attributes on top of the physical vehicles. For example, cars can be given boosts, machine guns, missiles (other graphical virtual objects put into the real world view), virtual jumps, etc. Viewers at the racecourse and at home could view the virtual competitors on the physical racecourse as if they were competing in the real world.
In some embodiments, a method for displaying a virtual vehicle includes means for identifying a position of a physical vehicle at a racecourse, means for identifying a position of a point of view at the racecourse, and means for providing, to a display system, a portion of the virtual vehicle visible from a virtual position of the point of view calculated within a virtual world based on the position of the point of view at the racecourse. Problems solved by embodiments disclosed herein can include overcoming the lack of realism experienced by users of prior solutions. In some embodiments herein, providing visible portions of the virtual vehicle to the user increases the realism experienced by the user. The increased realism provides a reliable and re-producible user experience by providing a real-world race that includes a virtual vehicle.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by a representation of the physical vehicle at a virtual position of the physical vehicle in the virtual world. Problems solved by embodiments disclosed herein can include how to provide a visible portion of a virtual vehicle. In some embodiments herein, providing visible portions of the virtual vehicle through a virtual calculation of the visible portion increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and re-producible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, the method further includes means for simulating, by the simulation system, an interaction between the virtual vehicle and the representation of the physical vehicle in the virtual world, the portion of the virtual vehicle visible from the virtual position of the point of view is calculated based on the interaction.
In some embodiments, the position of the point of view at the racecourse includes a point of view of an operator of the physical vehicle, and means for identifying a position of a point of view at the racecourse includes means for detecting, at a sensor, the point of view of the operator of the physical vehicle, the method further including: means for identifying a position of a physical object; means for receiving kinematics information of the virtual vehicle; means for generating, at a display system, a representation of the virtual vehicle based on the position of the physical object, the position of the point of view at the racecourse, and the kinematics information; and means for displaying the representation of the virtual vehicle such that the virtual vehicle is aligned with the physical object from the perspective of the position of the point of view at the racecourse. Problems solved by embodiments disclosed herein can include how to calculate a visible portion of a virtual vehicle. In some embodiments herein, calculating the visible portion of the virtual vehicle in a virtual world increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and re-producible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, the method further includes means for generating, at a display system, the representation of the portion of the virtual vehicle visible from the virtual position of the point of view.
In some embodiments, the method further includes means for displaying, by the display system, a series of representations of the virtual vehicle over a period of time to simulate a trajectory of the virtual vehicle on the racecourse, the series of representations includes the representation of the portion of the virtual vehicle visible from the virtual position of the point of view. In some embodiments, a predicted trajectory of the virtual vehicle is displayed. The prediction may be based on current trajectory, acceleration, current vehicle parameters, etc. This may allow an audience member to anticipate if a virtual vehicle is likely to overtake a physical vehicle. The predicted trajectory may be presented as a line, such as a yellow line. Other displays may also be included, such as “GOING TO PASS!” or “GOING TO CRASH!”
In some embodiments, the method further includes means for storing, by the display system, a digital 3-D model of the virtual vehicle used to generate each representation from the series of representations, wherein each representation is generated by the display system based on the digital 3-D model.
In some embodiments, the method further includes means for receiving a digital 3-D model of the virtual vehicle used to generate each representation from the series of representations, wherein each representation is generated by the display system based on the digital 3-D model.
In some embodiments, the kinematics information includes one or more vectors of motion, one or more scalars of motion, a position vector, a GPS location, a velocity, an acceleration, an orientation, or a combination thereof of the virtual vehicle.
In some embodiments, means for identifying the position of the physical vehicle includes means for detecting one or more vectors of motion, one or more scalars of motion, a position vector, a GPS location, a velocity, an acceleration, an orientation, or a combination thereof of the physical vehicle.
In some embodiments, means for identifying the position of the point of view at the racecourse includes means for detecting a spatial position of a head of an operator of the physical vehicle. In some embodiments, the method further includes means for transmitting, by a telemetry system coupled to the physical vehicle, the spatial position to a simulator system; means for receiving, at the telemetry system, information related to the portion of the virtual vehicle visible from the virtual position of the point of view; and means for displaying, to the operator of the physical vehicle, the representation of the portion of the virtual vehicle based on the information.
In some embodiments, means for displaying the representation of the portion of the virtual vehicle includes: means for translating the information into a set of graphical elements; and means for displaying the set of graphical elements. In some embodiments, the method further includes means for computing, at the simulation system, the information related to the portion visible from the virtual position of the point of view.
In some embodiments, means for displaying the series of representations of the virtual vehicle includes means for displaying the series of representations on a display of the physical vehicle, and the display is a transparent organic light-emitting diode (T-OLED) display that allows light to pass through the T-OLED to display the field of view to the operator.
In some embodiments, means for displaying the series of representations of the virtual vehicle includes means for displaying the series of representations on a display of the physical vehicle, and the display is an LCD display, the method further including: means for capturing, by a camera coupled to the physical vehicle, an image representing the field of view of the physical world as seen by the operator on the display in the physical vehicle; and means for outputting the image on a side of the LCD display to display the field of view to the operator, the series of representations are overlaid on the image displayed by the LCD display.
In some embodiments, means for displaying the series of representations of the virtual vehicle includes means for displaying the series of representations on a display of the physical vehicle, and the display includes a front windshield of the physical vehicle, one or more side windows of the physical vehicle, a rear windshield of the physical vehicle, one or more side mirrors, a rearview mirror, or a combination thereof.
In some embodiments, means for displaying the series of representations of the virtual vehicle includes means for displaying the series of representations on a display of a headset worn by the operator. In some embodiments, the headset is a helmet.
In some embodiments, means for identifying the position of the point of view at the racecourse includes means for detecting one or more of a spatial position of a user's eyes, a gaze direction of the user's eyes, or a focus point of the user's eyes.
In some embodiments, the method further includes: means for providing the position of the physical vehicle and the position of the point of view at the racecourse to a simulation system; means for calculating, by the simulation system, a virtual world including the virtual vehicle and a representation of the physical vehicle; means for calculating, by the simulation system, a virtual position of the point of view within the virtual world based on the position of the point of view at the racecourse; and means for calculating, by the simulation system, the portion of the virtual vehicle visible from the virtual position of the point of view, and means for providing, to a display system, the portion of the virtual vehicle visible from the virtual position of the point of view includes means for outputting, by the simulation system, the portion of the virtual vehicle visible from the virtual position of the point of view.
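The steps recited above (provide positions to the simulation system, build the virtual world, place the point of view in it, and compute the visible portion) could be sketched in miniature as follows. Everything here is an illustrative assumption: a flat 2-D world, a bare translation standing in for the physical-to-virtual transform, and occluders modeled as discs.

```python
import math

def to_virtual(xy, offset=(0.0, 0.0)):
    """Map a physical racecourse coordinate into the virtual world
    (a bare translation stands in for a calibrated transform)."""
    return (xy[0] + offset[0], xy[1] + offset[1])

def segment_hits_disc(p, q, center, radius):
    """True if the line of sight p -> q passes within `radius` of `center`."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return math.hypot(center[0] - p[0], center[1] - p[1]) <= radius
    t = max(0.0, min(1.0, ((center[0] - p[0]) * dx + (center[1] - p[1]) * dy) / seg_len2))
    nx, ny = p[0] + t * dx, p[1] + t * dy  # closest point on the segment
    return math.hypot(center[0] - nx, center[1] - ny) <= radius

def run_frame(vehicle_pos, pov_pos, virtual_vehicle_pos, occluders):
    """One frame: place the physical vehicle and the point of view in the
    virtual world, then report whether the virtual vehicle is visible."""
    v_pov = to_virtual(pov_pos)
    # The physical vehicle's representation also occludes; model it as a 1 m disc.
    blockers = list(occluders) + [(to_virtual(vehicle_pos), 1.0)]
    blocked = any(segment_hits_disc(v_pov, virtual_vehicle_pos, c, r)
                  for c, r in blockers)
    return "hidden" if blocked else "visible"
```

A real simulation system would perform per-pixel occlusion against full 3-D geometry; this sketch only captures the structure of the calculation.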
In some embodiments, means for identifying the position of the physical vehicle includes means for receiving a location of each of two portions of the vehicle.
In some embodiments, means for identifying the position of the physical vehicle includes means for receiving a location of one portion of the vehicle and an orientation of the vehicle. In some embodiments, means for receiving the orientation of the vehicle includes means for receiving gyroscope data. Problems solved by embodiments disclosed herein can include how to correctly position a physical vehicle in a virtual world for determining a visible portion of a virtual vehicle. In some embodiments herein, using a measure of orientation provides for accurate placement of the physical vehicle in the virtual world. The increased accuracy provides for a more faithful display of the visible portions of the vehicle, thereby improving the user experience.
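The two localization strategies just described (locations of two portions of the vehicle, or one location plus an orientation measure such as gyroscope data) might be sketched as follows. The function names and the planar, yaw-only model are assumptions.

```python
import math

def heading_from_two_points(front_xy, rear_xy):
    """Heading (radians) recovered from the locations of two portions of
    the vehicle, e.g., GPS fixes at the front and rear."""
    return math.atan2(front_xy[1] - rear_xy[1], front_xy[0] - rear_xy[0])

def pose_from_fix_and_gyro(xy, yaw):
    """Pose from a single location fix plus a gyroscope-derived yaw angle."""
    return {"x": xy[0], "y": xy[1], "yaw": yaw}
```

Either pose feeds the same downstream placement of the physical vehicle in the virtual world.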
In some embodiments, the position of the point of view at the racecourse includes a position of a point of view of an operator of the physical vehicle at the racecourse. In some embodiments, the position of the point of view at the racecourse includes a position of a point of view of an audience member present at a racecourse and observing the physical vehicle on the racecourse. In some embodiments, the position of the point of view at the racecourse includes a position of a camera present at a racecourse and the method further includes means for imaging the physical vehicle on the racecourse. In some embodiments, the camera images a portion of the racecourse on which the physical vehicle is racing. When the physical vehicle is travelling across the portion of the racecourse being captured by the camera, the camera may capture the physical vehicle in its video feed. When the physical vehicle is not travelling across the portion of the racecourse being captured by the camera, the camera may still capture the portion of the racecourse.
In some embodiments, means for identifying the position of the point of view at the racecourse includes at least one of means for measuring a point of gaze of eyes, means for tracking eye movement, means for tracking head position, means for identifying a vector from one or both eyes to a fixed point on the physical vehicle, means for identifying a vector from a point on the head to a fixed point on the physical vehicle, means for identifying a vector from a point on eye-wear to a fixed point on the physical vehicle, means for identifying a vector from a point on a head gear to a fixed point on the physical vehicle, means for identifying a vector from one or both eyes to a fixed point in a venue, means for identifying a vector from a point on the head to a fixed point in the venue, means for identifying a vector from a point on eye-wear to a fixed point in the venue, or means for identifying a vector from a point on a head gear to a fixed point in the venue. In some embodiments, means for identifying the position of the point of view at the racecourse includes means for measuring the point of gaze of the eyes and the means for measuring includes means for measuring light reflection or refraction from the eyes.
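Several of the measurements listed above reduce to identifying a vector from a point on the viewer (eyes, head, eye-wear, or head gear) to a fixed point. A minimal sketch, with assumed names:

```python
import math

def view_vector(eye_xyz, fixed_xyz):
    """Unit vector from the eye (or a point on the head, eye-wear, or head
    gear) to a fixed point, e.g., a marker on the physical vehicle or in
    the venue."""
    d = [fixed_xyz[i] - eye_xyz[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    if length == 0.0:
        raise ValueError("eye and fixed point coincide")
    return tuple(c / length for c in d)
```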
In some embodiments, means for providing the position of the physical vehicle and the position of the point of view at the racecourse includes means for wirelessly transmitting at least one position.
In some embodiments, means for calculating a virtual world includes means for transforming physical coordinates of the physical vehicle to virtual coordinates in the virtual world, and the virtual position of the physical vehicle includes the virtual coordinates.
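One plausible form of such a transform (assumed here, not recited by the patent) is a planar similarity: rotate, scale, and translate physical coordinates into virtual-world coordinates.

```python
import math

def make_physical_to_virtual(scale, theta, tx, ty):
    """Build a mapping from physical (x, y) to virtual coordinates; the
    calibration values would come from surveying the racecourse."""
    c, s = math.cos(theta), math.sin(theta)
    def transform(x, y):
        return (scale * (c * x - s * y) + tx,
                scale * (s * x + c * y) + ty)
    return transform
```

The virtual position of the physical vehicle would then simply be `transform` applied to its physical coordinates.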
In some embodiments, means for calculating the portion of the virtual vehicle visible from the virtual position of the point of view includes: means for calculating a representation of the physical vehicle in the virtual world, means for calculating a representation of a physical object in the virtual world between the point of view and the virtual vehicle within the virtual world, and means for extracting a portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by the representation of the physical vehicle and the representation of the physical object. In some embodiments, the portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view includes the unobscured portion. Problems solved by embodiments disclosed herein can include how to calculate a visible portion of a virtual vehicle, including more than just the portion that is not obscured by the physical vehicle. In some embodiments herein, calculating the visible portion in a virtual world that includes physical objects in the real world increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and reproducible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, means for extracting the portions of the virtual vehicle may include means for determining which pixels are obstructed by other representations, and only displaying pixels that are not obstructed by other representations. In some embodiments, means for extracting the portions of the virtual vehicle may include means for setting a pixel alpha value of zero percent (in RGBA space) for all pixels obstructed by other representations. For example, portions of the virtual vehicle may be obstructed by other virtual representations, e.g., another virtual vehicle, or representations of physical objects, e.g., objects within a physical vehicle or the physical vehicle itself. Any observed (from the virtual position of the point of view) pixel values can be used to provide the portions of the virtual vehicle that are visible from the virtual position of the point of view. In some embodiments, the pixels of unobscured and observed portions of the virtual vehicle can each be set to include an alpha value greater than zero percent (in RGBA space) to indicate that those unobscured pixels can be seen and should be displayed. In contrast, pixels set to an alpha value of zero percent indicate that those pixels are fully transparent, i.e., invisible, and would not be displayed.
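The alpha-value manipulation described above can be sketched with plain lists standing in for an RGBA frame; the flat pixel layout and the boolean occlusion mask are assumptions made for brevity.

```python
def mask_occluded(rgba_pixels, occluded):
    """Set alpha to 0 (fully transparent) for pixels obstructed by other
    representations; unobstructed pixels keep their nonzero alpha and
    would therefore be displayed."""
    return [(r, g, b, 0) if blocked else (r, g, b, a)
            for (r, g, b, a), blocked in zip(rgba_pixels, occluded)]
```

A display system compositing this output over the real-world view would then show only the unobscured portion of the virtual vehicle.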
In some embodiments, means for calculating the representation of the physical object between the virtual position of the point of view and the representation of the physical vehicle includes means for accessing a database of representations to obtain a virtual position of the physical object.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view consists of portions of the virtual vehicle that are unobscured by other representations in the virtual world.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a virtual shadow in the virtual world. In some embodiments, the virtual shadow is at least one of a shadow projected by the virtual vehicle and a shadow projected onto the virtual vehicle. In some embodiments, the portion of the virtual vehicle visible from the position of the point of view at the racecourse includes a virtual reflection. In some embodiments, the virtual reflection is at least one of a reflection of the virtual vehicle and a reflection on the virtual vehicle.
In some embodiments, means for calculating, by the simulation system, a portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view includes means for calculating a field of view from the virtual position of the point of view and means for providing, to the display system, the portion of the virtual vehicle visible from the virtual position of the point of view includes displaying the portion of the virtual vehicle within the field of view.
In some embodiments, means for calculating, by the simulation system, a portion of the virtual vehicle within the virtual world that is visible from the position of the virtual point of view includes means for calculating a field of view from the virtual position of the point of view and means for providing, to the display system, the portion of the virtual vehicle visible from the virtual position of the point of view consists of means for displaying the portion of the virtual vehicle within the field of view.
In some embodiments, a system for displaying virtual vehicles includes a first sensor detecting a position of a physical vehicle at a racecourse, a second sensor detecting a position of a point of view at the racecourse, and a simulation system outputting a portion of the virtual vehicle visible from a virtual position of the point of view. Problems solved by embodiments disclosed herein can include overcoming the lack of realism experienced by users of prior solutions. In some embodiments herein, providing visible portions of the virtual vehicle to the user increases the realism experienced by the user. The increased realism provides a reliable and reproducible user experience by providing a real-world race that includes a virtual vehicle.
In some embodiments, the simulation system determines the visible portion of the virtual vehicle based on a virtual position of the physical vehicle in a virtual world, a virtual position of the point of view in the virtual world, and a virtual position of the virtual vehicle in the virtual world. Problems solved by embodiments disclosed herein can include how to provide a visible portion of a virtual vehicle. In some embodiments herein, providing visible portions of the virtual vehicle through a virtual calculation of the visible portion increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and reproducible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by a representation of the physical vehicle at the virtual position of the physical vehicle.
In some embodiments, the system further includes the simulation system configured to simulate an interaction between the virtual vehicle and the representation of the physical vehicle in the virtual world, the portion of the virtual vehicle visible from the virtual position of the point of view is calculated based on the interaction.
In some embodiments, the system includes a simulation system configured to: receive the position of the physical vehicle and the position of the point of view at the racecourse; calculate a virtual world including the virtual vehicle and a representation of the physical vehicle; calculate a virtual position of the point of view within the virtual world based on the position of the point of view at the racecourse; calculate the portion of the virtual vehicle visible from the virtual position of the point of view; and output, to the display system, the portion of the virtual vehicle visible from the virtual position of the point of view. Problems solved by embodiments disclosed herein can include how to calculate a visible portion of a virtual vehicle. In some embodiments herein, calculating the visible portion of the virtual vehicle in a virtual world increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and reproducible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, the first sensor receives a location of each of two portions of the vehicle. In some embodiments, the first sensor receives a location of one portion of the vehicle and an orientation of the vehicle. In some embodiments, receiving the orientation of the vehicle includes receiving gyroscope data. Problems solved by embodiments disclosed herein can include how to correctly position a physical vehicle in a virtual world for determining a visible portion of a virtual vehicle. In some embodiments herein, using a measure of orientation provides for accurate placement of the physical vehicle in the virtual world. The increased accuracy provides for a more faithful display of the visible portions of the vehicle, thereby improving the user experience.
In some embodiments, the position of the point of view at the racecourse includes a position of a point of view of an operator of the physical vehicle at the racecourse. In some embodiments, the position of the point of view at the racecourse includes a position of a point of view of an audience member present at a racecourse and observing the physical vehicle on the racecourse.
In some embodiments, the position of the point of view at the racecourse includes a position of a camera present at a racecourse and imaging the physical vehicle on the racecourse. In some embodiments, the camera images a portion of the racecourse on which the physical vehicle is racing. When the physical vehicle is travelling across the portion of the racecourse being captured by the camera, the camera may capture the physical vehicle in its video feed. When the physical vehicle is not travelling across the portion of the racecourse being captured by the camera, the camera may still capture the portion of the racecourse.
In some embodiments, the second sensor is configured to detect the position of the point of view at the racecourse by at least one of measuring the point of gaze of eyes, tracking eye movement, tracking head position, identifying a vector from one or both eyes to a fixed point on the physical vehicle, identifying a vector from a point on the head to a fixed point on the physical vehicle, identifying a vector from a point on eye-wear to a fixed point on the physical vehicle, identifying a vector from a point on a head gear to a fixed point on the physical vehicle, identifying a vector from one or both eyes to a fixed point in a venue, identifying a vector from a point on the head to a fixed point in the venue, identifying a vector from a point on eye-wear to a fixed point in the venue, or identifying a vector from a point on a head gear to a fixed point in the venue. In some embodiments, the second sensor is configured to detect the position of the point of view at the racecourse by measuring light reflection or refraction from the eyes.
In some embodiments, receiving the position of the physical vehicle and the position of the point of view at the racecourse includes wirelessly receiving at least one position.
In some embodiments, calculating a virtual world includes transforming physical coordinates of the physical vehicle to virtual coordinates in the virtual world, and the virtual position of the physical vehicle includes the virtual coordinates.
In some embodiments, calculating a portion of the virtual vehicle visible from the position of the point of view at the racecourse includes: calculating a representation of the physical vehicle in the virtual world, calculating a representation of a physical object in the virtual world between the point of view and the virtual vehicle within the virtual world, and extracting a portion of the virtual vehicle that is unobscured, from the virtual position of the point of view, by the representation of the physical vehicle and the representation of the physical object. In some embodiments, the portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view includes the unobscured portion. Problems solved by embodiments disclosed herein can include how to calculate a visible portion of a virtual vehicle, including more than just the portion that is not obscured by the physical vehicle. In some embodiments herein, calculating the visible portion in a virtual world that includes physical objects in the real world increases the accuracy of the visible portion determination. The increased accuracy provides a reliable and reproducible user experience by providing a real-world race that includes a virtual vehicle. In some embodiments herein, providing visible portions through a virtual calculation increases the efficiency of the calculation. The increased efficiency reduces power usage and improves representation speed for a more seamless user experience.
In some embodiments, extracting the portions of the virtual vehicle may include determining which pixels are obstructed by other representations, and only displaying pixels that are not obstructed by other representations. In some embodiments, extracting the portions of the virtual vehicle may include setting a pixel alpha value of zero percent (in RGBA space) for all pixels obstructed by other representations. For example, portions of the virtual vehicle may be obstructed by other virtual representations, e.g., another virtual vehicle, or representations of physical objects, e.g., objects within a physical vehicle or the physical vehicle itself. Any observed (from the virtual position of the point of view) pixel values can be used to provide the portions of the virtual vehicle that are visible from the virtual position of the point of view. In some embodiments, the pixels of unobscured and observed portions of the virtual vehicle can each be set to include an alpha value greater than zero percent (in RGBA space) to indicate that those unobscured pixels can be seen and should be displayed. In contrast, pixels set to an alpha value of zero percent indicate that those pixels are fully transparent, i.e., invisible, and would not be displayed.
In some embodiments, calculating the representation of the physical object between the virtual position of the point of view and the representation of the physical vehicle includes accessing a database of representations to obtain a virtual position of the physical object.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view consists of portions of the virtual vehicle that are unobscured by other representations in the virtual world.
In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a virtual shadow in the virtual world. In some embodiments, the virtual shadow is at least one of a shadow projected by the virtual vehicle and a shadow projected onto the virtual vehicle. In some embodiments, the portion of the virtual vehicle visible from the virtual position of the point of view includes a virtual reflection. In some embodiments, the virtual reflection is at least one of a reflection of the virtual vehicle and a reflection on the virtual vehicle.
In some embodiments, calculating a portion of the virtual vehicle within the virtual world that is visible from the virtual position of the point of view includes calculating a field of view from the virtual position of the point of view and providing, to the display system, the portion of the virtual vehicle visible from the virtual position of the point of view includes displaying the portion of the virtual vehicle within the field of view.
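The field-of-view calculation can be sketched as an angular test from the virtual position of the point of view; the yaw-only 2-D model and the names are assumptions.

```python
import math

def in_field_of_view(pov_xy, facing, half_fov, target_xy):
    """True if the target lies within +/- half_fov radians of the facing
    direction, as seen from the point of view."""
    angle = math.atan2(target_xy[1] - pov_xy[1], target_xy[0] - pov_xy[0])
    diff = (angle - facing + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(diff) <= half_fov
```

Portions of the virtual vehicle failing this test would simply not be provided to the display system.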
In some embodiments, calculating a portion of the virtual vehicle within the virtual world that is visible from the position of the point of view at the racecourse includes calculating a field of view from the virtual position of the point of view and providing, to the display system, the portion of the virtual vehicle visible from the virtual position of the point of view consists of displaying the portion of the virtual vehicle visible within the field of view.
In some embodiments, the system may facilitate a competition between two virtual vehicles on a physical racecourse. In a scenario where two virtual vehicles compete on a physical racecourse without any physical vehicles, the first sensor detecting a position of a physical vehicle would be unnecessary. The system in such an embodiment could include a sensor detecting a position of a point of view at the racecourse and a display system providing a portion of the virtual vehicle visible from the position of the point of view at the racecourse. All aspects of the foregoing systems not concerning the position of the physical vehicle could be applied in such an embodiment. In some embodiments, the virtual vehicles are given special properties and a video game appearance. For example, cars can be given boosts, machine guns, missiles (or other graphical virtual objects put into the real world view), virtual jumps, etc. In some embodiments, the physical vehicles can be given similar video game attributes. For example, graphical virtual objects such as machine guns or missiles may be rendered and overlaid on the physical vehicles as observed on a display. Viewers at the racecourse and at home could view the virtual competitors on the physical racecourse as if they were competing in the real world.
As used herein, “point of view” can be understood to be a real-world position from which the virtual vehicle will be viewed, for example, that of an operator of a physical vehicle (e.g., a driver in the physical vehicle) viewing his surroundings. A display may be used to augment the operator's view by introducing the virtual vehicle into the view. Because the virtual vehicle is added to the real-world point of view, if the display system were removed or disabled, no virtual vehicle would be seen from the real-world point of view.
FIG. 1 is a diagram 100 of a physical vehicle 101, according to some embodiments. FIG. 1 provides an example of a point of view of an operator (e.g., driver) of physical vehicle 101. Thus, the position of the point of view of the operator is the position and direction (gaze) of the operator's eyes (or some approximation of the operator's eye position and direction). Although FIG. 1 is provided with reference to the point of view of the operator of a physical vehicle, the teachings apply equally to other points of view, such as an audience member at a racecourse on which the physical vehicle is driving or a camera at the racecourse.
Physical vehicle 101 includes a display system 102 (including rendering component 107), a simulation component 106, a telemetry system 104 (including sensors 108), RF circuitry 105, and a force controller 112. Physical vehicle 101 also includes eye-position detector 110, front windshield 120, rear-view mirror 122, rear windshield 124, side windows 126A and 126B, side mirrors 128A and 128B, seat and head brace 130, speakers 132, and brakes 134. FIG. 1 also includes a vehicle operator 114. In FIG. 1, vehicle operator 114 is illustrated wearing a helmet 116, a visor over eyes 117, and a haptic suit 118. In some embodiments, the visor worn over eyes 117 is a component of helmet 116.
As shown in FIG. 1, physical vehicle 101 is an automobile. In some embodiments, devices within physical vehicle 101 communicate with a simulation system 140 to simulate one or more virtual vehicles within a field of view of vehicle operator 114. Simulation system 140 may be on-board physical vehicle 101. In some embodiments, as illustrated in diagram 100, simulation system 140 may be remote from physical vehicle 101, as also described elsewhere herein. In some embodiments, the functionality performed by simulation system 140 may be distributed across systems that are both on-board physical vehicle 101 and remote from physical vehicle 101. In some embodiments, simulation system 140 generates and maintains a racing simulation 141 between one or more live participants (i.e., vehicle operator 114 operating physical vehicle 101) and one or more remote participants (not shown).
Simulating virtual vehicles in real-time enhances the racing experience of vehicle operator 114. Implementing simulation capabilities within physical vehicle 101 allows vehicle operator 114, who is a live participant, to compete against a remote participant operating a virtual vehicle within racing simulation 141. The field of view of vehicle operator 114 is the observable world seen by vehicle operator 114, augmented with a virtual vehicle. In some embodiments, the augmentation can be provided by a display, e.g., displays housed in or combined with one or more of front windshield 120, rear-view mirror 122, rear windshield 124, side windows 126A and 126B, and side mirrors 128A and 128B.
In some embodiments, the augmentation can be provided by a hologram device or 3-D display system. In these embodiments, front windshield 120, rear-view mirror 122, rear windshield 124, side windows 126A and 126B, or side mirrors 128A and 128B can be T-OLED displays that enable 3-D images to be displayed, or can utilize cameras to capture the surroundings, which are then displayed with 3-D images overlaid on non-transparent displays.
In some embodiments, the augmentation can be provided by a head-mounted display (HMD) worn by vehicle operator 114 over eyes 117. The HMD may be worn as part of helmet 116. In some embodiments, the HMD is embedded in visors, glasses, goggles, or other devices worn in front of the eyes of vehicle operator 114. Like the displays described above, the HMD may operate to augment the field of view of vehicle operator 114 by rendering one or more virtual vehicles on one or more displays in the HMD.
In other embodiments, the HMD implements retinal projection techniques to simulate one or more virtual vehicles. For example, the HMD may include a virtual retinal display (VRD) that projects images onto the left and right eyes of vehicle operator 114 to create a three-dimensional (3D) image of one or more virtual vehicles in the field of view of vehicle operator 114.
In some embodiments, the augmentation can be provided by one or more displays housed in physical vehicle 101 as described above (e.g., front windshield 120 and rear-view mirror 122), a display worn by vehicle operator 114 as described above (e.g., an HMD), a hologram device as described above, or a combination thereof. An advantage of simulating virtual vehicles on multiple types of displays (e.g., on a display housed in physical vehicle 101 and an HMD worn by vehicle operator 114) is that an augmented-reality experience can be maintained when vehicle operator 114 takes off his HMD. Additionally, multiple participants in physical vehicle 101 can share the augmented-reality experience regardless of whether each participant is wearing an HMD.
In some embodiments, multiple virtual vehicles are simulated for vehicle operator 114. For example, multiple virtual vehicles may be displayed in front of, behind, and/or beside the operator. For example, one or more virtual vehicles may be displayed on front windshield 120 (an example display) and one or more virtual vehicles may be displayed on rear-view mirror 122 (an example display). In addition, one or more virtual vehicles may be displayed on the HMD. Similarly, physical vehicle 101 may be one of a plurality of physical vehicles in proximity to each other. In some embodiments, a virtual vehicle being simulated for vehicle operator 114 can be another physical vehicle running on a physical racecourse at a different physical location than that being run by vehicle operator 114. For example, one driver could operate a vehicle on a racecourse in Monaco and another driver could operate a vehicle on a replica racecourse in Los Angeles. Embodiments herein contemplate presenting one or both of the Monaco and Los Angeles drivers with a virtual vehicle representing the other driver.
Returning to simulation system 140 and as described above, simulation system 140 may include racing simulation 141, which simulates a competition between physical vehicle 101 and one or more virtual vehicles on a virtual racecourse. In some embodiments, the virtual racecourse is generated and stored by simulation system 140 to correspond to the physical racecourse in which vehicle operator 114 is operating, e.g., driving, physical vehicle 101. In some embodiments, the virtual racecourse is generated using 360-degree laser-scan video recording or similar technology. Therefore, as vehicle operator 114 controls physical vehicle 101 on the physical racecourse in real-time, the virtual trajectory of physical vehicle 101 within racing simulation 141 is simulated by simulation system 140 to emulate the physical, real-world trajectory of physical vehicle 101 on the physical racecourse.
In some embodiments, to enable simulation system 140 to simulate physical vehicle 101 on the virtual racecourse in racing simulation 141, physical vehicle 101 includes telemetry system 104. Telemetry system 104 includes sensors 108 that detect data associated with physical vehicle 101. Sensors 108 include one or more devices that detect kinematics information of physical vehicle 101. In some embodiments, kinematics information includes one or more vectors of motion, one or more scalars of motion, an orientation, a Global Positioning System (GPS) location, or a combination thereof. For example, a vector of motion may include a velocity, a position vector, or an acceleration. For example, a scalar of motion may include a speed. Accordingly, sensors 108 may include one or more accelerometers to detect acceleration, one or more GPS (or GLONASS or other global navigation system) receivers to detect the GPS location, one or more motion sensors, one or more orientation sensors, or a combination thereof. In some embodiments, the real-time data collected by sensors 108 are transmitted to simulation system 140. Other real-time data may include measurements of the car, such as engine heat and tire temperature. In some embodiments, one or more of the kinematics information and car measurements are used for simulation predictability. For example, some embodiments may include predictive simulation engines that pre-build scenes based on these other measurements and the velocity and acceleration information.
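The patent does not specify a wire format for this telemetry, so as an illustrative sketch only, kinematics readings like those listed above might be bundled into a packet and serialized for the uplink to the simulation system. All field names and values here are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TelemetryPacket:
    """One kinematics snapshot from the on-board sensors (hypothetical schema)."""
    timestamp_s: float    # seconds since the session started
    gps: tuple            # (latitude, longitude) from the GPS receiver
    speed_mps: float      # scalar of motion: speed in metres per second
    velocity_mps: tuple   # vector of motion: (vx, vy, vz)
    accel_mps2: tuple     # (ax, ay, az) from the accelerometers
    heading_deg: float    # orientation of the vehicle
    tire_temp_c: tuple    # per-wheel tire temperatures (other car data)

def encode_for_uplink(packet: TelemetryPacket) -> bytes:
    """Serialize a packet for transmission to the simulation system."""
    return json.dumps(asdict(packet)).encode("utf-8")

pkt = TelemetryPacket(12.5, (43.7347, 7.4206), 41.2,
                      (40.0, 10.0, 0.0), (0.3, -2.1, 0.0),
                      77.0, (88.0, 90.5, 86.2, 89.1))
wire = encode_for_uplink(pkt)
```

A real system would more likely use a compact binary encoding over the RF link; JSON is used here only to keep the sketch readable.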
In some embodiments, physical vehicle 101 includes radio frequency (RF) circuitry 105 for transmitting data, e.g., telemetric information generated by telemetry system 104, to simulation system 140. RF circuitry 105 receives and sends RF signals, also called electromagnetic signals. RF circuitry 105 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 105 may include well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 105 may communicate with networks, such as the Internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication.
The wireless communication optionally uses any of a plurality of communications standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSDPA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol.
In some embodiments, simulation system 140 includes RF circuitry, similar to RF circuitry 105, for receiving data from physical vehicle 101. Based on the telemetric information received from physical vehicle 101, simulation system 140 simulates physical vehicle 101 as an avatar within racing simulation 141. In some embodiments, as will be further described with respect to FIG. 5, simulation system 140 receives inputs for controlling and simulating one or more virtual vehicles within racing simulation 141. In some embodiments, simulation system 140 calculates kinematics information of the virtual vehicle based on the received inputs and a current state of the virtual vehicle on the virtual racecourse in racing simulation 141. For example, the current state may refer to a coordinate, a position, a speed, a velocity, an acceleration, an orientation, etc., of the virtual vehicle being simulated on the virtual racecourse. To replicate the virtual race between a live participant, i.e., vehicle operator 114, and a virtual participant operating a virtual vehicle, for vehicle operator 114, simulation system 140 transmits kinematics information of the virtual vehicle to components (e.g., display system 102 or simulation component 106) in physical vehicle 101 via RF circuitry. As described elsewhere in this disclosure, it is to be understood that, depending on the type of context being simulated, display system 102 and other components shown in physical vehicle 101 may be housed in other types of devices.
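The patent leaves the vehicle dynamics model unspecified. As a minimal sketch of how the simulation system might advance a virtual vehicle's current state (position, speed, heading) from the received control inputs, here is a simplified point-mass update with made-up constants; a production racing simulation would use a far richer physics model:

```python
import math

MAX_ACCEL = 8.0   # m/s^2 at full throttle (illustrative constant)
DRAG = 0.02       # crude linear drag coefficient (illustrative constant)

def step_virtual_vehicle(state, throttle, steer_dps, dt):
    """Advance the virtual vehicle's state one tick of the simulation loop.
    throttle is 0..1; steer_dps is the heading change rate in degrees/second."""
    accel = throttle * MAX_ACCEL - DRAG * state["speed"]
    speed = max(0.0, state["speed"] + accel * dt)
    heading = state["heading"] + steer_dps * dt
    rad = math.radians(heading)
    return {
        "x": state["x"] + speed * math.cos(rad) * dt,  # advance along heading
        "y": state["y"] + speed * math.sin(rad) * dt,
        "speed": speed,
        "heading": heading,
    }

state = {"x": 0.0, "y": 0.0, "speed": 30.0, "heading": 0.0}
state = step_virtual_vehicle(state, throttle=1.0, steer_dps=0.0, dt=0.1)
```

The resulting state dictionary is what would be packaged as "kinematics information of the virtual vehicle" and sent back to the car for display.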
In some embodiments, to enable two-way interactive racing where interactions simulated in racing simulation 141 can be reproduced for vehicle operator 114 driving physical vehicle 101, simulation system 140 determines whether the avatar of physical vehicle 101 within racing simulation 141 is in contact with obstacles, such as the virtual vehicle, being simulated within racing simulation 141. Upon determining a contact, simulation system 140 calculates force information, audio information, or a combination thereof associated with the contact. In some embodiments, simulation system 140 transmits the force or audio information to physical vehicle 101, where the force or audio information is reproduced at physical vehicle 101 to enhance the virtual reality racing experience for vehicle operator 114.
Returning to physical vehicle 101, physical vehicle 101 includes a display system 102 for generating a representation of a virtual vehicle based on information, e.g., kinematics information of the virtual vehicle, received from simulation system 140 via RF circuitry 105. In some embodiments, display system 102 is coupled to a simulation component 106 that generates a virtual representation of the virtual vehicle based on the kinematics information of the virtual vehicle received from simulation system 140. In some embodiments, simulation component 106 generates the virtual representation based on the kinematics information and eye measurements (e.g., a spatial position) of the eyes 117 of vehicle operator 114. To further enhance the realism of the virtual representation, i.e., a graphically generated vehicle, simulation component 106 generates the virtual representation based on the kinematics information, a spatial position of eyes 117 of vehicle operator 114, a gaze direction of eyes 117, and a focus point of eyes 117, according to some embodiments. As described herein, eye measurements may include a spatial position of eyes 117, a gaze direction of eyes 117, a focus point of eyes 117, or a combination thereof of the left eye, the right eye, or both left and right eyes.
In some embodiments, physical vehicle 101 includes eye-position detector 110, e.g., a camera or light (e.g., infrared) reflection detector, to detect eye measurements (e.g., spatial position, gaze direction, focus point, or a combination thereof) of eyes 117 of vehicle operator 114. In some embodiments, eye-position detector 110 detects a spatial position of the head of vehicle operator 114 to estimate the measurements of eyes 117. For example, eye-position detector 110 may detect helmet 116 or a visor on helmet 116 to estimate the measurements of eyes 117. Detecting eye measurements and/or head position may also include detecting at least one of a position and an orientation of helmet 116 worn by vehicle operator 114.
The position of the eyes may be calculated directly (e.g., from a fixed sensor, such as a track-side sensor) or based on a combination of the sensor in the car and a position of the car.
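The second option, combining an in-car sensor reading with the car's own position, is essentially a rigid-body transform. This sketch shows the idea in 2-D (a real system would use a full 3-D pose with pitch and roll); the function name and coordinate convention are illustrative, not from the patent:

```python
import math

def eye_world_position(car_pos, car_heading_deg, eye_offset_in_car):
    """Combine the in-car eye-sensor offset with the car's world position and
    heading: rotate the offset by the heading, then translate by the car's
    position (2-D sketch)."""
    th = math.radians(car_heading_deg)
    ox, oy = eye_offset_in_car
    return (car_pos[0] + ox * math.cos(th) - oy * math.sin(th),
            car_pos[1] + ox * math.sin(th) + oy * math.cos(th))

# Car at (10, 20) facing +y (90 degrees); eyes 1 m ahead of the car origin.
eye = eye_world_position((10.0, 20.0), 90.0, (1.0, 0.0))
```

With the car facing +y, a 1 m forward offset lands the eyes 1 m up the y-axis from the car's position.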
In some embodiments, eye-position detector 110 includes a camera that can record a real-time video sequence or capture a series of images of the face of vehicle operator 114. Then, eye-position detector 110 may track and detect the measurements of eyes 117 by analyzing the real-time video sequence or the series of images. In some embodiments, eye-position detector 110 implements one or more algorithms for tracking a head movement or orientation of vehicle operator 114 to aid in eye tracking, detection, and measurement. As shown in diagram 100, eye-position detector 110 may be coupled to rear-view mirror 122. However, as long as eye-position detector 110 is implemented in proximity to physical vehicle 101, eye-position detector 110 can be placed in other locations, e.g., on the dashboard, inside or outside physical vehicle 101. In some embodiments, to increase accuracy in detection, eye-position detector 110 can be implemented within helmet 116 or other head-mounted displays (HMDs), e.g., visors or goggles, worn by vehicle operator 114 over eyes 117.
In some embodiments, eye-position detector 110 implemented within an HMD further includes one or more focus-tunable lenses, one or more mechanically actuated displays, and mobile gaze-tracking technology, with which reproduced scenes can be drawn that correct common refractive errors in the VR world, as the eyes are continually monitored based on where a user looks in a virtual scene. An advantage of the above techniques is that vehicle operator 114 would not need to wear contact lenses or corrective glasses while wearing an HMD implementing eye-position detector 110.
In some embodiments, simulation component 106 generates the virtual representation of the virtual vehicle based on a 3-D model of the virtual vehicle. For example, the virtual representation may represent a perspective of the 3-D model as viewed from eyes 117. In some embodiments, by generating the virtual representation from the perspective of eyes 117, whose measurements are detected or estimated by eye-position detector 110, the virtual representation can be simulated in accurate dimensions and scaling for vehicle operator 114 to increase the reality of racing against the virtual vehicle. In some embodiments, the 3-D model may be pre-stored on simulation component 106 or received from simulation system 140.
In some embodiments, rendering component 107 within display system 102 displays the generated virtual representation on one or more displays of physical vehicle 101. As discussed above, the virtual representation may be generated by simulation component 106 in some embodiments or by simulation system 140 in others. The one or more displays may include windows of physical vehicle 101, e.g., front windshield 120 or side windows 126A-B, or mirrors of physical vehicle 101, e.g., rear-view mirror 122 or side mirrors 128A-B. In some embodiments, the one or more displays may be components in helmet 116. Helmet 116 may include a helmet, visors, glasses, or a goggle system worn by vehicle operator 114.
In some embodiments, one or more displays (e.g., front windshield 120) can be transparent organic light-emitting diode (T-OLED) displays that allow light to pass through the T-OLED to display the field of view to vehicle operator 114. In these embodiments, rendering component 107 renders the virtual representation of the virtual vehicle as a layer of pixels on the one or more displays. The T-OLED displays may allow vehicle operator 114 to see both the simulated virtual vehicle and the physical, un-simulated world in his field of view.
In other embodiments, one or more displays (e.g., front windshield 120) can be non-transparent liquid crystal displays (LCDs). In these embodiments, unlike the T-OLED displays, the LCD does not allow light to pass through to enable vehicle operator 114 to see the physical, un-simulated world in his field of view. Instead, in these embodiments, a camera (e.g., a pinhole camera) facing outwards with respect to the LCD and coupled to physical vehicle 101 can record a live video feed of the physical, un-simulated world, representing the field of view of the physical world as would be seen from eyes 117 of vehicle operator 114 if the LCD were transparent (e.g., a T-OLED display). Then, rendering component 107 may display the live video feed on the interior side of the LCD to display the field of view to vehicle operator 114. Further, rendering component 107 may overlay the generated virtual representation on the live video feed being displayed by the LCD to enable vehicle operator 114 to see the simulated, virtual vehicle.
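The overlay step for the non-transparent LCD can be thought of as simple compositing: camera pixels show through wherever the rendered virtual layer is transparent. This toy sketch (not from the patent) uses nested lists in place of real image buffers, with `None` marking transparent overlay pixels:

```python
def composite(camera_frame, virtual_layer):
    """Overlay the rendered virtual-vehicle layer on a live camera frame.
    None in the virtual layer means 'transparent: keep the camera pixel'."""
    return [[v if v is not None else c
             for c, v in zip(cam_row, virt_row)]
            for cam_row, virt_row in zip(camera_frame, virtual_layer)]

frame = [["road", "road"],
         ["sky", "sky"]]
layer = [[None, "car"],   # the virtual vehicle covers one pixel
         [None, None]]
out = composite(frame, layer)
```

A real renderer would do this per frame on the GPU with alpha blending rather than binary transparency.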
In some embodiments, the non-transparent LCD cannot by itself display images or live video feeds in 3D. Accordingly, the camera used to record the live video feed may include one or more cameras that are part of a stereoscopic camera system to record the physical world in color and in 3D. In some embodiments, to further enhance the 3D effect of the live video feed being displayed, the non-transparent LCD is a multi-view autostereoscopic 3D display, i.e., an automultiscopic display, that enables vehicle operator 114 to view the displayed 3D video feed from different angles as vehicle operator 114 moves his head and, as a result, his eyes 117.
In high-speed racing events, the head of vehicle operator 114 moves very little. Therefore, in some embodiments, eye-position detector 110 for tracking a position of the head or eyes 117 of vehicle operator 114 may be omitted from physical vehicle 101 to reduce the number of components and complexity in simulating virtual vehicles at physical vehicle 101. In some embodiments, to enable vehicle operator 114 to freely move his head, eye-position detector 110 is implemented to track a position of the head or eyes 117 of vehicle operator 114. In embodiments where the display is a non-transparent display, an angle of one or more cameras, e.g., cameras in a stereoscopic camera system, can be adjusted to correspond to the tracked position of the head or eyes 117 of vehicle operator 114.
In some embodiments, to enable two-way interactive racing where interactions simulated on the virtual racecourse in racing simulation 141 can be replicated for vehicle operator 114 driving physical vehicle 101, simulation component 106 determines a proximity of the virtual vehicle to physical vehicle 101 based on the kinematics information of the virtual vehicle received from simulation system 140. In some embodiments, upon determining a contact between the virtual vehicle and physical vehicle 101 based on the determined proximity, simulation system 140 calculates force information, audio information, or a combination thereof associated with the contact. Then, simulation component 106 may transmit the force information to force controller 112 and/or the audio information to speakers 132. In some embodiments, playing the audio information via speakers 132 may emulate the sound of the engine, brakes, tires, and collision between physical vehicle 101 and the virtual vehicle being simulated by simulation system 140. In some embodiments, the audio information may include a volume that is calculated based on a distance calculated between physical vehicle 101 and the virtual vehicle on the simulated racecourse, and may take into account the directional position of the head and orientation of the ears of vehicle operator 114. In some embodiments, speakers 132 may include audio devices equipped in physical vehicle 101 (e.g., a loudspeaker or speaker system) or audio devices worn by vehicle operator 114 (e.g., headphones or earpieces). In embodiments where speakers 132 are worn by vehicle operator 114, speakers 132 may be implemented within a head-mounted display such as helmet 116.
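The distance-based volume described above implies some attenuation law, which the patent does not pin down. As a hedged sketch, this version holds full volume inside a reference distance and falls off inversely beyond it; the constants and the law itself are illustrative assumptions:

```python
import math

def contact_audio_volume(phys_pos, virt_pos, max_volume=1.0, ref_dist_m=5.0):
    """Scale playback volume by the distance between the physical vehicle and
    the virtual vehicle on the simulated racecourse (hypothetical law)."""
    d = math.dist(phys_pos, virt_pos)          # Euclidean distance
    return max_volume * min(1.0, ref_dist_m / max(d, 1e-9))

near = contact_audio_volume((0.0, 0.0), (3.0, 4.0))    # 5 m away
far = contact_audio_volume((0.0, 0.0), (30.0, 40.0))   # 50 m away
```

The same scaling could be applied per ear, using the head orientation mentioned above, to give the sound a direction as well as a level.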
For sound reproduced for observers not in the physical vehicle (e.g., an audience member at a racecourse or watching at home), speakers can be placed around the track, in the headgear of an audience member, or at the point of view of a camera. In the virtual world, a microphone position can be set, just as a camera position can. Similarly, sound-generation positions can be set. In some embodiments, when a virtual car with a sound-generation position is farther from the microphone, the noise it makes is quieter.
In some embodiments, force controller 112 controls one or more actuators based on the force information to emulate the contact simulated between physical vehicle 101 and the virtual vehicle on the virtual racecourse simulated by simulation system 140. For example, force controller 112 may control one or more force actuators built into seat and head brace 130 to emulate how vehicle operator 114 would feel, e.g., a sensation of being bumped in the head, in a collision. Similarly, in some embodiments, force controller 112 may communicate via wired or wireless communications with a haptic suit 118 worn by vehicle operator 114 to emulate the sensation of vehicle operator 114 contacting a real physical vehicle. Force controller 112 may also control one or more force actuators built into physical vehicle 101 to emulate physical vehicle 101 making contact with a virtual object. In some embodiments, force controller 112 controls one or more mechanical systems that affect the actual functioning of physical vehicle 101. For example, force controller 112 may control one or more brakes 134, a steering column, or the power of physical vehicle 101, among other mechanical and/or electrical systems, to emulate the effects of contact between physical vehicle 101 and the virtual vehicle or virtual object as if the virtual object, such as a virtual vehicle, were actually a physical object on the same racecourse as physical vehicle 101. As described above, in some embodiments, the virtual object being simulated can be a physical vehicle in the real world on a racecourse at a different physical location. Therefore, the present disclosure also enables two vehicle operators running on different racecourses to feel as if they are competing on the same racecourse, since their counterpart physical vehicles can be simulated as virtual vehicles.
In some embodiments involving electric cars, emulating effects may include control over the power delivered to the axles, and in some cases to each specific wheel in a car with four electric motors (one for each wheel). This may advantageously allow a bump to be simulated by a small downward spike in the power of that wheel's motor. This can be controlled based on the duration of the impact and other factors.
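One hedged reading of this per-wheel scheme: the commanded motor power dips briefly during the impact window, with the dip depth and window length chosen by the force controller from the simulated contact. All numbers and the function shape here are illustrative assumptions, not taken from the patent:

```python
def wheel_power_command(base_power_w, impact_start_s, impact_duration_s, t_s,
                        dip_fraction=0.4):
    """Return the power commanded to one wheel's electric motor at time t_s.
    A brief downward spike during the impact window emulates a bump; outside
    the window the base power passes through unchanged."""
    if impact_start_s <= t_s < impact_start_s + impact_duration_s:
        return base_power_w * (1.0 - dip_fraction)
    return base_power_w

before = wheel_power_command(20000.0, 1.0, 0.2, 0.5)   # before the impact
during = wheel_power_command(20000.0, 1.0, 0.2, 1.1)   # inside the window
```

A smoother envelope (e.g., a half-sine dip) would likely feel more natural than this rectangular pulse; the rectangle keeps the sketch minimal.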
In some embodiments, as described with respect to FIG. 5, some or all of the functionality of simulation component 106 described above may be performed remotely by, for example, simulation system 140. In these embodiments, display system 102 receives a virtual representation generated by simulation system 140. Relatedly, in these embodiments, force controller 112 and speakers 132 may receive force and audio information, respectively, that is calculated by simulation system 140.
FIGS. 2A-C are diagrams showing how multiple virtual vehicles are displayed on one or more displays, according to some embodiments. For ease of explanation, FIGS. 2A-C will be described with respect to the elements, e.g., display system 102 and vehicle operator 114, of FIG. 1. According to some embodiments, the virtual vehicles can be displayed according to an augmented reality embodiment or a full-rendering embodiment, each of which will be further described below.
In the augmented reality embodiment, views 202A and 204A of FIG. 2A can be augmented with virtual vehicles 228 and 230 being output on respective displays 220 and 222 of FIG. 2C to enable vehicle operator 114 to see portions of virtual vehicles 212 and 214 shown in respective views 202B and 204B of FIG. 2B. In some embodiments, display 220 and display 222 may correspond to front windshield 120 and rear-view mirror 122 as described with respect to FIG. 1.
As shown in FIG. 2A, view 202A shows the field of view of vehicle operator 114, who may see, via displays 220 and 222, a physical vehicle 206A on the physical racecourse. Similarly, view 204A shows empty space, as there are no real vehicles on the physical racecourse viewable by vehicle operator 114 via display 222.
In some embodiments, to simulate one or more virtual vehicles for display on displays 220 and 222, display system 102 identifies one or more positions 224 and 226 on displays 220 and 222, respectively. In some embodiments, position 224 may correspond to a physical position 208A in the field of view of vehicle operator 114. For example, position 224 on display 220 may correspond to a portion of the physical racecourse or a portion of a building or landmark viewable by vehicle operator 114 (shown as position 208A in view 202A) and simulated in the virtual racecourse within racing simulation 141 of simulation system 140. Similarly, position 226 may correspond to a different portion of the physical racecourse or a portion of a building or landmark viewable by vehicle operator 114 (shown as position 210A in view 202A).
In some embodiments, as described with respect to FIG. 1, simulation component 106 may generate a first virtual vehicle 228 as a first representation and a second virtual vehicle 230 as a second representation. In some embodiments, simulation component 106 may generate the first virtual vehicle 228 based on position 224, measurements of eyes 117 (as described with respect to eye-position detector 110), and the kinematics of the first virtual, competitor vehicle. As shown in display 220, rendering component 107 displays first virtual vehicle 228 to align with position 224. Similarly, rendering component 107 may display second virtual vehicle 230 to align with position 226 on display 222.
As a result, vehicle operator 114 may see physical objects and virtual objects as shown in views 202B and 204B. For example, similar to view 202A, view 202B shows that vehicle operator 114 may continue to see physical vehicle 206B. In addition, view 202B shows that vehicle operator 114 may see virtual objects such as virtual vehicle 212, displayed as virtual vehicle 228 on display 220. Similarly, view 204B shows that vehicle operator 114 may see virtual objects such as virtual vehicle 214, displayed as virtual vehicle 230 on display 222. Further embodiments are described with respect to FIG. 10.
In the full-rendering embodiment, displays 220 and 222 can be configured to render both physical and virtual objects for display to enable vehicle operator 114 to see virtual vehicles alongside physical vehicles. In this embodiment, rendering component 107 can render and display physical objects such as roads and physical vehicle 206A as shown in views 202A and 204A of FIG. 2A. As described above with respect to the augmented reality embodiment, simulation component 106 may generate a first virtual vehicle 228 as a first representation and a second virtual vehicle 230 as a second representation for display. In the full-rendering embodiment, displays 220 and 222 can be configured to display virtual vehicles 228 and 230, respectively, alongside physical objects as shown in views 202B and 204B seen by vehicle operator 114. For example, view 202B shows the road, physical vehicle 206B, and virtual vehicle 212 being rendered and displayed. Similarly, view 204B shows virtual vehicle 214 being rendered and displayed. In some embodiments, in the full-rendering embodiment, an outward-facing camera (with respect to displays 220 and/or 222) can capture a live video feed of the surroundings. In these embodiments, displays 220 and 222 can be configured to display the physical objects by displaying each frame of the live video feed. Further, displays 220 and 222 can be configured to display virtual objects by overlaying virtual vehicles onto each displayed frame.
FIGS. 3A-D are diagrams showing how visible portions of virtual vehicle 322 and visible portions of virtual vehicle 324 are displayed on a display 320, according to some embodiments. For ease of explanation, diagram 300 will be described with respect to the elements (e.g., display system 102, vehicle operator 114, and simulation system 140) of FIG. 1. FIG. 3A shows an example real-world view 302A that may be observed by vehicle operator 114 through conventional displays. FIG. 3B shows an example virtual rendering 332 of real-world view 302A to include virtual vehicles 343 and 344. FIG. 3C shows an example of display 320 for displaying visible portions 322 and 324 of virtual vehicles 343 and 344, respectively. Display 320 may correspond to a display implemented within a visor, a helmet (e.g., helmet 116), or other headgear worn by an operator (e.g., vehicle operator 114) sitting within and driving a physical vehicle (e.g., physical vehicle 101). FIG. 3D shows an example augmented view 302B that may be observed by vehicle operator 114 through display 320.
As shown in FIG. 3A, real-world view 302A shows the field of view of vehicle operator 114 if virtual vehicles were not displayed, i.e., via conventional displays. As shown in real-world view 302A, vehicle operator 114 may see, through display 320, other physical vehicles such as physical vehicle 310A on a physical racecourse, as well as physical objects within physical vehicle 101. For example, such physical objects may include, without limitation, rearview mirror 304A, vehicle frame 306A, windshield wipers 308A, dashboard 312A, etc. In some embodiments, physical objects may include the hands or arms of vehicle operator 114. Additionally, as shown in real-world view 302A, vehicle operator 114 may see a shadow of physical vehicle 310A. In some embodiments, vehicle operator 114 may see physical vehicles and physical objects through display 320 because display 320 can be a transparent or semi-transparent display.
As discussed above with respect to FIG. 1, kinematics information of physical vehicle 101 (e.g., position information) and a position of the operator's point of view may be transmitted to simulation system 140, which is configured to provide visible portions of virtual vehicles 313 and 314. In some embodiments, based on the kinematics information and the position of the operator's point of view, simulation system 140 can calculate a virtual world to include virtual vehicles and a representation of physical vehicle 101 racing against each other on a virtual racecourse corresponding to the physical racecourse seen by vehicle operator 114. In some embodiments, simulation system 140 can calculate representations of the various physical objects within the virtual world.
In some embodiments, to enable simulation system 140 to track and calculate representations of the hands or arms of vehicle operator 114, vehicle operator 114 can wear gloves that embed one or more sensors (e.g., accelerometers, position sensors, etc.) that transmit position-related measurements to simulation system 140. Based on sensor measurements, e.g., position or acceleration information, simulation system 140 can calculate corresponding representations of arms or hands (not shown) in the virtual world.
In some embodiments, to enable simulation system 140 to track and calculate representations of the hands or arms of vehicle operator 114, one or more cameras can be mounted in the physical vehicle being operated by vehicle operator 114. The one or more cameras may track positions of the arms and hands based on markers embedded in or displayed on gloves or a tracksuit worn by vehicle operator 114. For example, markers may include specific colors, patterns, materials, etc. In these embodiments, the one or more cameras may transmit the captured information to simulation system 140, which calculates the corresponding representations of arms or hands (not shown) in the virtual world.
As shown in FIG. 3B, simulation system 140 may calculate, within the virtual world, a virtual rendering 332 of real-world view 302A. In virtual rendering 332, simulation system 140 can calculate a representation 340 of physical vehicle 310A and virtual vehicles 343 and 344. Additionally, simulation system 140 can calculate representations 334, 336, 338, and 342 of corresponding physical objects: rearview mirror 304A, vehicle frame 306A, windshield wipers 308A, and dashboard 312A. As shown in virtual rendering 332, simulation system 140 can exclude calculating representations of physical objects that do not obstruct the view of virtual vehicles 343 and 344. For example, the speedometer and steering wheel as seen by vehicle operator 114 in real-world view 302A may not be calculated by simulation system 140 in virtual rendering 332. In some embodiments, as shown in virtual rendering 332, simulation system 140 can calculate shadows of physical vehicle 340 and virtual vehicles 343 and 344.
In some embodiments, simulation system 140 can calculate portions of virtual vehicles 322 and 324 to display on display 320 of FIG. 3C to enable vehicle operator 114 to compete against virtual drivers in the real world. In some embodiments, a visible portion of a virtual vehicle from the position of the point of view of vehicle operator 114 is that portion of the virtual vehicle that is not obstructed by objects in the virtual world from a virtual position in the virtual world corresponding to the position of the point of view in physical vehicle 101. In some embodiments, the simulation system can convert the position of the point of view of vehicle operator 114 to virtual coordinates within the virtual world. For example, from the point of view of an operator of the physical vehicle, the corresponding virtual position in the virtual world would be inside a representation of physical vehicle 101 in the virtual world. From the corresponding virtual position of the point of view of vehicle operator 114 in the virtual world, a view of the virtual vehicle may be obstructed by the simulated physical vehicle (for example, representations of vehicle frame 306A or windshield wipers 308A), other simulated physical vehicles (e.g., a representation of physical vehicle 310A), shadows, simulated trees and other stationary objects, the simulated racecourse (e.g., when the virtual vehicle is in a dip and partially obstructed by the course itself), and so on. The visible portion of the virtual vehicle is then the unobstructed view of the virtual vehicle. Further embodiments are described with respect to FIGS. 6 and 9.
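The visible-portion calculation amounts to a per-pixel occlusion test from the operator's virtual viewpoint. A minimal sketch, assuming depth maps are available (as they would be from a standard depth-buffered renderer, though the patent does not prescribe one): compare the virtual vehicle's depth at each pixel against the depth of all simulated occluders (vehicle frame, wipers, other cars, the course itself):

```python
def visible_portion(vehicle_depth, occluder_depth):
    """Per-pixel visibility mask for a virtual vehicle: a pixel is visible
    only where the vehicle is present (not None) and nearer to the virtual
    viewpoint than every simulated occluder at that pixel."""
    return [[vd is not None and vd < od
             for vd, od in zip(v_row, o_row)]
            for v_row, o_row in zip(vehicle_depth, occluder_depth)]

# 2x2 toy depth maps (metres from the viewpoint; None = vehicle absent).
vehicle = [[12.0, 12.0],
           [None, 12.0]]
occluders = [[0.5, 30.0],   # e.g., a wiper at 0.5 m hides the first pixel
             [30.0, 30.0]]
mask = visible_portion(vehicle, occluders)
```

Only the pixels where the mask is true would be transmitted to display 320 and drawn, so occluded parts of the virtual vehicle correctly disappear behind real-world objects.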
For example, as shown in virtual rendering 332 in FIG. 3B, the simulated view of virtual vehicle 343 shows that portions of virtual vehicle 343 are obstructed by a representation 336 of vehicle frame 306A. Similarly, in virtual rendering 332, the simulated view of virtual vehicle 344 shows that portions of virtual vehicle 344 are obstructed by a representation 340 of physical vehicle 310A and a representation 338 of windshield wiper 308A.
In some embodiments, simulation system 140 may calculate visible portions of a virtual vehicle 322 to be portions of virtual vehicle 343 in virtual rendering 332 that are not obscured by a representation of car frame 336 in the virtual world. Similarly, simulation system 140 may calculate visible portions of a virtual vehicle 324 to be portions of virtual vehicle 344 in virtual rendering 332 that are not obscured by representations 340 and 338 of physical vehicle 310A and windshield wiper 308A, respectively. In some embodiments, information related to these calculated visible portions 322 and 324 can be transmitted to components within physical vehicle 101 and displayed by display 320.
In some embodiments, as shown in FIG. 3C, components within physical vehicle 101, such as display system 102, can display visible portions 322 and 324 of virtual vehicles 343 and 344, respectively, on display 320. In some embodiments, visible portions 322 and 324 can include shadows of virtual vehicles 343 and 344 as calculated and shown in virtual rendering 332 of FIG. 3B. In some embodiments, by augmenting real-world view 302A with visible portions 322 and 324 being displayed on display 320, display system 102 enables vehicle operator 114 to see both physical vehicles in the real world and virtual vehicles.
In some embodiments, augmented view 302B in FIG. 3D shows the field of view of vehicle operator 114 once display 320 displays (e.g., as rendered by display system 102) visible portions of virtual vehicles 322 and 324. For example, similar to view 302A, vehicle operator 114 may still see, via display 320, various physical objects on the racecourse in the real world. For example, as shown in augmented view 302B, vehicle operator 114 may still see rearview mirror 304B, vehicle frame 306B, windshield wiper 308B, physical vehicle 310B, and dashboard 312B. Additionally, vehicle operator 114 may see virtual vehicles 313 and 314 being displayed. In some embodiments, virtual vehicles 313 and 314 seen by vehicle operator 114 respectively correspond to visible portions of virtual vehicles 322 and 324 being displayed on display 320, as described above. In some embodiments, the techniques described with respect to FIGS. 2A-C can be combined with the techniques described with respect to FIGS. 3A-D.
FIGS. 4A-D are diagrams showing how visible portions of virtual vehicle 422 are displayed on a display 420, according to some embodiments. FIG. 4A shows an example real-world view 402A that may be observed by a spectator or imaged by a video camera. FIG. 4B shows an example virtual rendering 430 of real-world view 402A to include virtual vehicle 434. FIG. 4C shows an example of display 420 for displaying visible portions 422 of virtual vehicle 434. Display 420 may correspond to a display implemented within a visor, a helmet (e.g., helmet 116), or other headgear worn by a viewer (e.g., an audience member) present at a physical racecourse. FIG. 4D shows an example augmented view 402B that may be observed by an audience member or imaged by a video camera through display 420.
As shown in FIG. 4A, real-world view 402A shows the field of view of the viewer if virtual vehicles were not displayed, i.e., via conventional displays. As shown in real-world view 402A, the viewer may see, via display 420, other viewers 404A and various physical objects in the real world. For example, such physical objects may include fence 406A, public announcement (PA) horn speakers 408A, physical vehicles 410A and 412A, and shadows 411A and 413A of respective physical vehicles 410A and 412A. In some embodiments, the viewer may see physical vehicles and physical objects through display 420 because display 420 can be a transparent or semi-transparent display.
In some embodiments, in addition to fixed objects like fences or walls, the physical objects discussed above that may obstruct the viewer's field of view of the racecourse can include non-stationary objects whose positions may change over time. For example, such non-stationary objects may include the heads of audience members or the bodies of audience members when they stand. In some embodiments, to enable simulation system 140 to accurately calculate representations of both fixed and non-stationary objects, the viewer's headset can include a camera facing the racecourse and capturing a portion of the racecourse. The lines and borders of the racecourse or other markers placed on the racecourse can be detected by the camera to determine whether one or more physical objects are obstructing the viewer's view of the racecourse. For example, the camera can detect omitted portions or breaks in the edges, lines, or markers of the racecourse. In some embodiments, information about the breaks or omitted portions can be transmitted to simulation system 140. In some embodiments, simulation system 140 can determine which portions of the virtual vehicle are obstructed by physical objects by determining an overlapping portion of the virtual vehicle with the one or more breaks. Since a break indicates that the viewer's view is being blocked by a physical object, the simulation system 140 can set the alpha values of pixels in the overlapping portion to "zero percent" (in RGBA) to make these overlapping portions transparent.
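The alpha-zeroing step described above can be sketched directly: given the rendered RGBA pixels of the virtual vehicle and a mask marking where detected breaks overlap the vehicle, the alpha channel is set to zero so those pixels render as fully transparent. The function name and data layout are hypothetical; the patent specifies only the "zero percent" alpha behavior.

```python
# Minimal sketch (hypothetical names): make virtual-vehicle pixels
# transparent wherever a detected break in the racecourse markers
# indicates a physical object blocks the viewer's line of sight.

def mask_obstructed_pixels(rgba_pixels, break_mask):
    """rgba_pixels -- 2-D grid of (r, g, b, a) tuples for the virtual vehicle.
    break_mask     -- 2-D grid of booleans; True where a break overlaps.
    Returns a new grid with alpha forced to 0 on overlapping pixels.
    """
    out = []
    for prow, brow in zip(rgba_pixels, break_mask):
        out.append([(r, g, b, 0 if blocked else a)
                    for (r, g, b, a), blocked in zip(prow, brow)])
    return out
```

On a transparent or semi-transparent display, the zero-alpha pixels simply let the real obstructing object show through.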
In some embodiments, information related to the viewer's point of view may be transmitted to a simulation system (e.g., simulation system 140 or simulation system 540 of FIG. 5) configured to provide visible portions of virtual vehicle 422 to the viewer. For example, such information may include a position of the viewer's point of view. As described with respect to FIG. 6, the simulation system can calculate a virtual world to include virtual vehicles and representations of physical vehicles racing against each other on a virtual racecourse. In some embodiments, simulation system 140 can calculate representations of the various physical objects (e.g., PA speakers 408A) within the virtual world.
As shown in FIG. 4B, the simulation system may calculate, within the virtual world, a virtual rendering 430 of real-world view 402A. In virtual rendering 430, the simulation system can calculate virtual vehicle 434 and representations 440 and 432 of physical vehicles 410A and 412A. In some embodiments, virtual vehicle 434 as calculated by the simulation system can include a calculated shadow 435. Similarly, representations 440 and 432 of physical vehicles 410A and 412A can be calculated to include respective shadows 441 and 433. In some embodiments, the simulation system calculates shadows 441 and 433 in the virtual world to correspond to respective shadows 411A and 413A as seen by the viewer in real-world view 402A. Additionally, the simulation system can calculate representations 436 and 438 of corresponding physical objects: fence 406A and PA speakers 408A. As shown in virtual rendering 430, the simulation system can exclude calculating representations of physical objects that do not obstruct the view of virtual vehicle 422. For example, the audience (e.g., viewers 404A) in real-world view 402A may not be calculated by the simulation system in virtual rendering 430.
In some embodiments, the simulation system can calculate portions of virtual vehicle 422 to display on display 420 of FIG. 4C to enable the viewer to see a race between physical vehicles 410A and 412A and virtual vehicles such as virtual vehicle 434 being simulated in the virtual world. In some embodiments, a visible portion of a virtual vehicle from the position of the viewer's point of view is that portion of the virtual vehicle that is not obstructed by objects in the virtual world from a virtual position in the virtual world corresponding to the position of the viewer's point of view. In some embodiments, the simulation system can convert the position of the viewer's point of view to virtual coordinates within the virtual world. From the corresponding virtual position of the viewer's point of view in the virtual world, a view of the virtual vehicle may be obstructed by simulated physical vehicles (for example, representations of physical vehicles 410A and 412A), other simulated objects (e.g., representations of fence 406A and horn speakers 408A), simulated trees, or the simulated racecourse (e.g., when the virtual vehicle is in a dip and partially obstructed by the course itself, etc.). The visible portion of the virtual vehicle is then the unobstructed view of the virtual vehicle. Further embodiments are described with respect to FIGS. 6 and 9.
For example, as shown in virtual rendering 430 in FIG. 4B, the simulated view of virtual vehicle 434 shows that portions of virtual vehicle 434 are obstructed by a representation 438 of PA speaker 408A and a representation 432 of physical vehicle 412A.
In some embodiments, the simulation system may calculate visible portions of virtual vehicle 422 to be portions of virtual vehicle 434 that are not obscured by representation of PA horn speakers 438 in virtual rendering 430 and that are not obscured by representation of physical vehicle 432 in the virtual world. In some embodiments, information related to the calculated visible portions 422 can be transmitted to the viewer and displayed by display 420.
In some embodiments, as shown in FIG. 4C, components worn by the viewer, such as display system 592 of FIG. 5, can display visible portions 422 of virtual vehicle 434 on display 420. As shown in FIG. 4C, visible portions 422 can include car frame and details 424 and a shadow 426 of virtual vehicle 434. In some embodiments, visible portions 422 can include a shadow 428 being cast on virtual vehicle 434 as calculated in virtual rendering 430. For example, shadow 428 may be a shadow cast by representation 432 of physical vehicle 412A being calculated in virtual rendering 430. In some embodiments, by augmenting real-world view 402A with visible portion 422 being displayed on display 420, a display system (e.g., display system 592) enables the viewer to see both physical vehicles in the real world and virtual vehicles.
In some embodiments, augmented view 402B in FIG. 4D shows the field of view of the viewer once display 420 displays (e.g., as rendered by display system 592) visible portions of virtual vehicle 422. For example, similar to view 402A, the viewer may still see, via display 420, various physical objects on the racecourse in the real world. For example, as shown in augmented view 402B, the viewer may still see other viewers 404B, fence 406B, public announcement (PA) horn speakers 408B, and physical vehicles 410B and 412B as well as their respective shadows 411B and 413B. Additionally, the viewer may see virtual vehicle 414 and shadow 415 of virtual vehicle 414 being displayed. In some embodiments, virtual vehicle 414 seen by the viewer can correspond to visible portions of virtual vehicle 422 being displayed on display 420, as described above. In the example of augmented view 402B, virtual vehicle 414 seen by the viewer is obstructed by PA horn speakers 408B and physical vehicle 412B. In some embodiments, virtual vehicle 414 being displayed can overlap portions of shadow 413A as may be seen by the viewer in real-world view 402A. As a result and as shown by shadow 413B in augmented view 402B, portions of shadow 413A may be obscured by visible portions of virtual vehicle 422. In some embodiments, visible portions of virtual vehicle 422 can include shadow 428 of physical vehicle 412A. In these embodiments and as shown in augmented view 402B, the viewer can see shadow 416 being cast on virtual vehicle 414.
In some embodiments, augmented view 402B can be provided to the viewer via a non-transparent display based on a full rendering technique as described with respect to FIGS. 2A-C. In these embodiments, a camera coupled to a viewer's headset can capture one or more video frames of the viewer's field of view as seen in real-world view 402A. The simulation system can similarly calculate virtual vehicle 434 within virtual rendering 430. However, in these embodiments, instead of displaying only visible portions of virtual vehicle 422 on display 420, a display implemented in the viewer's headset can be configured to output the one or more captured video frames with visible portions of virtual vehicle 422 overlaid thereon. In some embodiments, physical objects (e.g., fences, beams, infrastructure, etc.) may block the viewer's view of the racecourse. Because augmented view 402B may be a fully rendered view, the display may overlay both virtual vehicles and representations of physical vehicles (as calculated by the simulation system) on top of the captured video frames so that large physical objects do not block the viewer's view of the race for extended periods of time.
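The overlay step in this full-rendering approach amounts to standard per-pixel alpha compositing: each rendered pixel is blended over the captured video frame according to its alpha value. The following sketch uses hypothetical names and plain nested lists for frames; a real implementation would operate on GPU textures or array buffers.

```python
# Minimal sketch (hypothetical names): alpha-composite rendered visible
# portions of a virtual vehicle over a captured camera frame ("over"
# blending, 8-bit channels).

def overlay(frame_rgb, overlay_rgba):
    """frame_rgb    -- 2-D grid of (r, g, b) camera pixels.
    overlay_rgba    -- 2-D grid of (r, g, b, a) rendered pixels; a == 0
                       means the frame shows through unchanged.
    Returns the composited 2-D grid of (r, g, b) pixels.
    """
    out = []
    for frow, orow in zip(frame_rgb, overlay_rgba):
        row = []
        for (fr, fg, fb), (orr, og, ob, oa) in zip(frow, orow):
            a = oa / 255.0  # normalize alpha to [0, 1]
            row.append((round(orr * a + fr * (1 - a)),
                        round(og * a + fg * (1 - a)),
                        round(ob * a + fb * (1 - a))))
        out.append(row)
    return out
```

Pixels the simulation marked transparent (alpha zero) leave the live video untouched, while opaque pixels of the virtual vehicle replace it.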
In some embodiments, as will be further described with respect to FIGS. 5, 6, and 9, real-world view 402A may correspond to a live video feed captured by a camera (e.g., video camera 580 of FIG. 5) installed at a physical racecourse. In these embodiments, information related to a point of view of the camera may be transmitted to the simulation system. For example, such information may include one or more of the camera's position, orientation, tilt, or degree of rotation. Based on the information of the camera's point of view, the simulation system may calculate a virtual world that includes virtual rendering 430 with the virtual vehicle, representations of physical vehicles, and representations of physical objects, as described with respect to FIG. 4B. In some embodiments, based on virtual rendering 430, the simulation system can calculate visible portions of virtual vehicle 422 as would be viewable at the position of the camera's point of view. As described with respect to FIG. 4C, visible portions of virtual vehicle 422 may include car frame and details 424, shadow 426 of the virtual vehicle, or shadow 428 being cast by other objects in virtual rendering 430. In some embodiments, visible portions of virtual vehicle 422 may be overlaid on the live video feed of real-world view 402A to display a race between physical vehicles and virtual vehicles, as shown in augmented view 402B and as described above with respect to FIG. 4D.
FIG. 5 is a system 500 for simulating a virtual race between a physical vehicle 510 and a simulated entity 530, according to some embodiments. In some embodiments, a network 502 communicatively couples various components: physical vehicle 510, simulated entity 530, simulation system 540, viewing devices 560, content-delivery system 570, camera 580, and viewer 590. In some embodiments, camera 580 can be coupled to content-delivery system 570. As shown in system 500, network 502 may be a conduit for data flows between the various components. Network 502 can be a wired and/or wireless network that includes any combination of local area networks (LANs), wide area networks (WANs), the Internet, and the like.
In some embodiments, simulation system 540 includes a number of engines 552 that operate on models 542 and information received from physical vehicle 510 and simulated entity 530 to simulate the virtual race, represented as racing simulation 550. Further, simulation system 540 includes RF circuitry 548, which may include similar components as RF circuitry 105 described with respect to FIG. 1, for communicating data (e.g., graphics data, kinematics information, force information, audio information, etc.) with physical vehicle 510, viewer 590, and simulated entity 530.
In some embodiments, models 542 include vehicle models 544 and racecourse models 546. Vehicle models 544 may include 3-D models of animate or inanimate objects in the virtual environment of racing simulation 550. For example, vehicle models 544 may include a 3-D model of physical vehicle 510 as well as a 3-D model of a virtual vehicle corresponding to simulated entity 530. Racecourse models 546 may include 2-D or 3-D models of the physical racecourse on which physical vehicle 510 is operating. In some embodiments, the 2-D or 3-D models may include information related to terrains, boundaries, or topological features and the like. Racecourse models 546 may include a racecourse and related characteristics (e.g., terrain, material type, length, etc.), stands for an audience, etc.
In some embodiments, to generate and maintain racing simulation 550, engines 552 include a physics engine 554, a graphics engine 556, and an audio engine 558. Physics engine 554 may include algorithms for emulating the laws of physics realistically within racing simulation 550. In particular, physics engine 554 includes algorithms to control how components, e.g., simulated physical vehicles or simulated virtual vehicles, interact with each other and a virtual racecourse in racing simulation 550. In some embodiments, as described elsewhere herein, physics engine 554 generates and maintains racing simulation 550 based on kinematics information received from physical vehicle 510 and based on inputs received from simulated entity 530. As will be further described with respect to FIGS. 6 and 9, the kinematics information can include position information of physical vehicle 510, according to some embodiments. For example, physics engine 554 may generate an avatar of physical vehicle 510 within racing simulation 550 based on a corresponding model in vehicle models 544, where a position of the avatar in racing simulation 550 may be calculated based on the received kinematics information. Additionally, physics engine 554 may generate kinematics information of a virtual vehicle corresponding to simulated entity 530 based on the inputs received from simulated entity 530. Using the generated kinematics information and vehicle models 544, physics engine 554 may simulate the virtual vehicle on the virtual racecourse within racing simulation 550.
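Advancing the avatar of the physical vehicle from received kinematics information can be as simple as dead-reckoning integration between telemetry updates. The sketch below is a hypothetical illustration of that idea (constant-acceleration integration per timestep), not the patent's actual physics engine.

```python
# Minimal sketch (hypothetical names): advance an avatar's simulated state
# one timestep from received kinematics information, using
# constant-acceleration dead reckoning between telemetry updates.

def update_avatar(position, velocity, acceleration, dt):
    """position, velocity, acceleration -- tuples of equal dimension.
    dt -- timestep in seconds.
    Returns (new_position, new_velocity).
    """
    new_velocity = tuple(v + a * dt for v, a in zip(velocity, acceleration))
    # p' = p + v*dt + 0.5*a*dt^2
    new_position = tuple(p + v * dt + 0.5 * a * dt * dt
                         for p, v, a in zip(position, velocity, acceleration))
    return new_position, new_velocity
```

When a fresh telemetry packet arrives, the integrated state would simply be replaced by (or blended toward) the measured position, keeping the avatar in sync with the real car.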
In some embodiments, to enable physical vehicle 510 to simulate one or more virtual vehicles on display 512, simulation system 540 transmits the kinematics information of the virtual vehicle to simulation component 522 or display system 514.
In some embodiments, as described with respect to simulation system 140 of FIG. 1, physics engine 554 (within simulation system 540) further calculates force information based on an interaction simulated between the physical vehicle and the virtual vehicle in racing simulation 550. In some embodiments, physics engine 554 calculates force information for physical vehicle 510 and force information for simulated entity 530. Then, simulation system 540 may transmit the calculated force information to physical vehicle 510, simulated entity 530, or both via RF circuitry 548.
In some embodiments, to enhance the realism of the race experienced at physical vehicle 510 and simulated entity 530, audio engine 558 (within simulation system 540) includes algorithms for calculating sounds within racing simulation 550. Audio engine 558 may include sound files related to engines, brakes, tires, and explosions, as well as collisions between vehicles. In some embodiments, audio engine 558 calculates a volume of a generated sound based on a distance between vehicles as calculated by physics engine 554 to generate racing simulation 550. Then, audio engine 558 may transmit audio information to physical vehicle 510, simulated entity 530, or both.
In some embodiments, graphics engine 556 generates 3-D animated graphics for racing simulation 550. For example, graphics engine 556 may utilize specialized hardware for rendering vehicles (e.g., an avatar of physical vehicle 510 or a virtual vehicle corresponding to simulated entity 530) based on vehicle models 544 and computations from physics engine 554. Further, graphics engine 556 may render a virtual racecourse within racing simulation 550 based on racecourse models 546. In some embodiments, the graphics information (e.g., vehicles and racecourse) generated by graphics engine 556 can be transmitted to physical vehicle 510, simulated entity 530, or a combination thereof via RF circuitry 548.
Graphics engine 556 may utilize techniques such as rasterization or ray tracing for generating the 3-D animated graphics. In some embodiments, graphics engine 556 includes computer software applications that are programmed and compiled to be executed on one or more processors of simulation system 540. In other embodiments, graphics engine 556 can be built upon graphics application programming interfaces (APIs) such as Direct3D or OpenGL.
In some embodiments, physical vehicle 510 corresponds to physical vehicle 101 described with respect to FIG. 1. To simulate a virtual vehicle, corresponding to simulated entity 530, at physical vehicle 510, physical vehicle 510 includes one or more of the following components: display 512, display system 514, telemetry system 520 (which may include sensors 516), force controller 518, and simulation component 522. These components may correspond to the similarly named components described with respect to FIG. 1. In general, telemetry system 520 can be a two-way telemetry system that receives and transmits data. For example, telemetry system 520 may transmit data, e.g., kinematics information of physical vehicle 510, monitored by sensors 516 to simulation system 540 via network 502. In some embodiments, the data includes a position of physical vehicle 510 on a physical racecourse. In some embodiments, the data captured by sensors 516 may include a position of a point of view of an operator of physical vehicle 510, as will be further described with respect to FIGS. 6 and 9.
In some embodiments, telemetry system 520 can receive kinematics information of the virtual vehicle from simulation system 540 as described above. Telemetry system 520 may route the received kinematics information to display system 514. Based on the received kinematics information, display system 514 may generate a virtual representation of the virtual vehicle that is processed by display system 514 for display on display 512. By simulating the virtual vehicle within a field of view of the operator of physical vehicle 510, system 500 enables the operator to feel as if the simulated vehicle is in physical proximity to the operator. As described above, the processing for generating the virtual representation of the virtual vehicle may be performed remotely, e.g., offloaded to simulation system 540. In these embodiments, the virtual representation can be generated and transmitted by simulation system 540 to display system 514. Display system 514 may then display the virtual representation on display 512 to simulate the virtual vehicle within the field of view of the operator.
In some embodiments, as will be further described with respect to FIGS. 6 and 9, to enhance the realism of the virtual representation, the generated virtual representation can include a portion of the virtual vehicle that is visible from the position of the point of view of the operator. In some embodiments, simulation system 540 can generate the visible portion. In some embodiments, simulation component 522 onboard physical vehicle 510 can generate the visible portion.
In some embodiments, to further enhance the realism of the race as experienced by the operator of physical vehicle 510, force controller 518 receives force information to control one or more mechanical elements in physical vehicle 510. In some embodiments, as described above, the force information as calculated by physics engine 554 is received from simulation system 540. In other embodiments, the force calculation may be performed on-board physical vehicle 510.
In some embodiments, viewer 590 may correspond to an audience member watching the race between physical vehicle 510 and simulated entity 530. To simulate a virtual vehicle for display to viewer 590, viewer 590 may wear or operate one or more devices that implement one or more of the following components: display 594, display system 592, telemetry system 596 (which may include sensors 598), and simulation component 599. These components may correspond to the similarly named components described with respect to physical vehicle 510. In general, telemetry system 596 transmits limited kinematics information of viewer 590 as detected by sensors 598. For example, the limited kinematics information may include a position (e.g., GPS position) of viewer 590. As viewer 590 is likely associated with limited motion, other types of kinematics information related to motion may not need to be captured by sensors 598, according to some embodiments. In some embodiments, like sensors 516, sensors 598 may include cameras for capturing a position of a point of view of viewer 590. In some embodiments, display 594 can be implemented in one or more devices worn by viewer 590. For example, display 594 may be implemented in a headset (e.g., a helmet) or a visor worn by viewer 590. In some embodiments, viewer 590 may wear or operate a device that implements simulation component 599. Similar to the functionality of simulation component 522, simulation component 599 may process some of the computations performed by simulation system 540.
In some embodiments, simulation system 540 communicates with content-delivery system 570 to display the competition between live and simulated participants, e.g., physical vehicles and virtual vehicles, to an audience via viewing devices 560. For example, the virtual vehicle may be overlaid onto real video footage from the perspective of a video camera 580 at the physical racecourse, and the combined video footage may be shown on one or more viewing devices 560 such that the audience would see the physical vehicle and the virtual vehicle on the same racecourse in competition.
In some embodiments, content-delivery system 570 includes: video server 572 for broadcasting video content via a cable or television network; and web server 574 for transmitting video content on-demand or via live streaming via network 502, e.g., the Internet. As discussed above, video server 572 can broadcast video content obtained via video camera 580. In some embodiments, a plurality of video cameras may be present at the physical racecourse to record live video footage of the race from different points of view. In these embodiments, video server 572 can select live video footage captured by one video camera (e.g., video camera 580) from the plurality of video cameras. In some embodiments, each video camera comprises its own point of view, and each point of view is used to determine the visible portion of the virtual vehicle for combining with the live image feed from each video camera.
Though video server 572 and web server 574 are shown as being implemented by content-delivery system 570, one or more of these servers (e.g., video server 572 and web server 574) may be implemented by separate entities or by simulation system 540.
In some embodiments, viewing devices 560 include a variety of electronic devices with displays for presenting video data. For example, audiences attending a live event/competition may watch the competition on viewing devices 560 that include television (TV) screens, jumbotrons, or the like. In another example, audiences at home may operate viewing devices 560 such as TVs, laptops, tablets, smartphones, and desktop computers, among other types of mobile devices. Further, in some embodiments, audiences both at home and at the live competition may wear viewing devices 560 such as HMDs or goggles that display the combined scene (e.g., including live and virtual participants) from the perspective of the audience member. The scene may be recreated based on location-based information of where the audience member is located, as well as head and eye spatial and directional measurements. The recreation may be performed by processors built into the HMDs, by portable computers or mobile devices with the audience, or by remote servers that register camera and audience member positional locations and stream the display information to the audience member's HMD or portable computer/mobile device.
In some embodiments, simulated entity 530 includes a simulation device 532 coupled to a display 534 and an input controller 536 for controlling the virtual vehicle in a virtual race. In some embodiments, simulation device 532 includes force controller 538 and display system 539. Simulation device 532 may be a general-purpose computer or a special-purpose computer such as a videogame console.
In some embodiments, input controller 536 may include a keyboard, a videogame controller, a joystick, a steering wheel and pedals, a force pad, or a treadmill, among other types of input devices for controlling the virtual vehicle in a virtual race.
In some embodiments, simulation device 532 receives inputs from input controller 536 and transmits the received inputs to simulation system 540. In some embodiments, simulation system 540 simulates the virtual vehicle on the virtual racecourse based on the inputs. Then, simulation system 540 may transmit display information corresponding to a position of the virtual vehicle on the virtual racecourse. Display system 539 within simulation device 532 may receive the display information and render the virtual race as computer-generated imagery (CGI) on display 534. In some embodiments, display system 539 projects the virtual race on display 534.
As shown in system 500, display 534 may include a television screen, a monitor, a projector, or other devices for displaying graphics data. In some embodiments, as described with respect to the displays of FIG. 1, display 534 may include a head-mounted display (HMD) worn by a user of input controller 536. For example, the HMD may include a visor, a headset (e.g., a helmet), glasses, goggles, or other devices worn in front of the user's eyes.
In other embodiments, the HMD implements retinal projection techniques to simulate one or more virtual vehicles. For example, the HMD may include a virtual retinal display (VRD) that projects images onto the left and right eyes of vehicle operator 114 to create a three-dimensional (3D) image of one or more virtual vehicles in the field of view of vehicle operator 114.
In some embodiments, force controller 538 receives force information from simulation system 540. The force information may be associated with an interaction, e.g., a collision, simulated by simulation system 540 between the virtual vehicle and the physical vehicle on the virtual racecourse. To enhance the virtual racing experience for simulated entity 530, force controller 538 may provide feedback to input controller 536 by, for example, vibrating input controller 536. In some embodiments, a user operating input controller 536 may wear a haptic suit including one or more actuators controlled by force controller 538 to emulate the physical sensations that would be felt by the user in a real collision.
FIG. 6 is a flowchart illustrating a method 600 for displaying virtual vehicles on displays, according to some embodiments. In some embodiments, method 600 includes steps performed at a physical vehicle 602, a simulation system 604, and a viewer 606. For example, steps performed at physical vehicle 602 may be implemented by components within a physical vehicle such as physical vehicle 101 of FIG. 1 or physical vehicle 510 of FIG. 5. For example, steps performed at simulation system 604 may be implemented by simulation system 140 of FIG. 1 or simulation system 540 of FIG. 5. For example, steps performed at viewer 606 may be performed by devices (e.g., components shown in viewer 590 of FIG. 5) worn by an audience member watching a race between physical vehicle 602 and a virtual vehicle on a physical racecourse.
In step 610, physical vehicle 602 identifies a position of physical vehicle 602. In some embodiments, physical vehicle 602 can identify the position of physical vehicle 602 by detecting kinematics information of physical vehicle 602 via one or more sensors (e.g., sensors 108 of FIG. 1 or sensors 516 of FIG. 5) on board physical vehicle 602. In some embodiments, the position of physical vehicle 602 includes location information for each of two portions of physical vehicle 602. For example, the location information for a first portion of physical vehicle 602 may be detected by a GPS sensor placed at the first portion. In some embodiments, the position of physical vehicle 602 includes a location of one portion of physical vehicle 602 and an orientation of physical vehicle 602. In some embodiments, the orientation of physical vehicle 602 can include gyroscope data detected by a sensor (e.g., a gyroscope) on-board physical vehicle 602.
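One reason to report locations for two portions of the vehicle is that the pair determines orientation as well as position: the bearing from the rear fix to the front fix gives the vehicle's heading. The sketch below is a hypothetical illustration of that geometry using a flat-earth approximation, which is reasonable over racecourse-scale distances; the function name is not from the patent.

```python
import math

# Minimal sketch (hypothetical name): derive a vehicle's heading from GPS
# fixes at two portions of the vehicle (e.g., front and rear sensors).
# Flat-earth approximation; adequate over racecourse-scale distances.

def heading_from_two_points(front, rear):
    """front, rear -- (latitude, longitude) fixes in degrees.
    Returns heading in degrees clockwise from north, in [0, 360).
    """
    dlat = front[0] - rear[0]
    # Scale longitude difference by cos(latitude) to get a local east offset.
    dlon = (front[1] - rear[1]) * math.cos(math.radians(rear[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0
```

With a single GPS fix plus gyroscope data, the same position-and-orientation pair is obtained directly instead of being derived.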
In step 611, physical vehicle 602 provides the position of physical vehicle 602 to simulation system 604.
In step 620, simulation system 604 receives inputs controlling a virtual vehicle. In some embodiments, the inputs can be received from input controller 536 as described with respect to FIG. 5.
In step 621, simulation system 604 calculates a virtual world for simulating a race on a virtual racecourse between the virtual vehicle and the physical vehicle based on inputs from physical vehicle 602 and inputs controlling the virtual vehicle. In some embodiments, the virtual world can be implemented in a racing simulation (e.g., racing simulation 141 of FIG. 1 or racing simulation 550 of FIG. 5). In some embodiments, the inputs from physical vehicle 602 can include the position of physical vehicle 602 provided in step 611. In some embodiments, to generate the racing simulation, simulation system 604 can calculate a representation of the physical vehicle to add to the virtual racecourse in the racing simulation based, in part, on the position provided in step 611. In some embodiments, to generate the racing simulation, simulation system 604 can calculate the virtual vehicle to add to the virtual racecourse in the racing simulation based on the inputs of step 620. In some embodiments, simulation system 604 can use the inputs to update kinematics information associated with the virtual vehicle in the virtual world. In some embodiments, the kinematics information includes one or more vectors of motion, one or more scalars of motion, a position vector, a GPS location, a velocity, an acceleration, an orientation, or a combination thereof of the virtual vehicle. Based on the updated kinematics information, simulation system 604 can update a simulation of the virtual vehicle in the virtual world, according to some embodiments.
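The kinematics update described above (controller input mapped to motion, then integrated each simulation tick) can be sketched as a minimal Python fragment. All names and the throttle-to-acceleration mapping are illustrative assumptions; the patent does not specify an integration scheme.

```python
from dataclasses import dataclass

@dataclass
class Kinematics:
    x: float = 0.0  # position along the track, in meters
    v: float = 0.0  # velocity, in meters per second

def update(state: Kinematics, throttle: float, dt: float,
           max_accel: float = 8.0) -> Kinematics:
    """Advance the virtual vehicle one simulation tick.

    The controller's throttle input (0..1) is mapped to an acceleration,
    which is then integrated to update velocity and position.
    """
    a = throttle * max_accel
    return Kinematics(
        x=state.x + state.v * dt + 0.5 * a * dt * dt,
        v=state.v + a * dt,
    )
```

A full simulation would extend the state with 3-D position vectors, orientation, and track constraints; this fragment only shows the input-driven update loop the step describes.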
In step 612, physical vehicle 602 identifies a position of a point of view of an operator of physical vehicle 602. In some embodiments, the position of the point of view of the operator can be determined with respect to a head of the operator as detected by a sensor (e.g., a camera) in physical vehicle 602. For example, determining the position of the point of view can include detecting a spatial position of the operator's head. For example, in some embodiments, the position of the operator's point of view can be determined by at least one of the following: tracking head position, identifying a vector from a point on the head to a fixed point on the physical vehicle, identifying a vector from a point on a head gear to a fixed point on the physical vehicle, identifying a vector from a point on the head to a fixed point in a venue, or identifying a vector from a point on a head gear to a fixed point in the venue. In some embodiments, the venue may include the physical racecourse, the stands, or other infrastructure at the physical racecourse.
In some embodiments, the position of the point of view of the operator can be determined with respect to eyes of the operator as detected by a sensor (e.g., a camera) in physical vehicle 602. For example, determining the position of the point of view can include detecting a spatial position of a user's eyes, a gaze direction of the user's eyes, or a focus point of the user's eyes, where the user is the operator. For example, in some embodiments, the position of the operator's point of view can be determined by at least one of the following: measuring a point of gaze of eyes, tracking eye movement, identifying a vector from one or both eyes to a fixed point on the physical vehicle, identifying a vector from a point on eye-wear (e.g., a visor) to a fixed point in a venue, identifying a vector from one or both eyes to a fixed point on the racecourse, or identifying a vector from one or both eyes to a fixed point in the venue. In some embodiments, the venue may include the physical racecourse, the stands, or other infrastructure at the physical racecourse. In some embodiments, the position of the point of view of the operator can be identified by measuring light reflection or refraction from the eyes.
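The vector-to-fixed-point technique above can be sketched in Python. The fragment below is illustrative only; the fixed reference point (e.g., a marker on the windshield) and the function names are assumptions not drawn from the patent.

```python
import math

def pov_from_vector(fixed_point, eye_to_fixed):
    """Recover an eye position and gaze direction from a known fixed point
    and a measured vector from the eye to that fixed point.

    fixed_point: 3-D coordinates of a fixed reference (e.g., on the vehicle).
    eye_to_fixed: measured 3-D vector from the eye to the fixed point.
    Returns (eye_position, unit_gaze_direction_toward_fixed_point).
    """
    eye = tuple(f - v for f, v in zip(fixed_point, eye_to_fixed))
    norm = math.sqrt(sum(c * c for c in eye_to_fixed))
    gaze = tuple(c / norm for c in eye_to_fixed)
    return eye, gaze
```

In practice the measured vector would come from head or eye tracking hardware; here it is simply supplied as input.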
In step 613, physical vehicle 602 provides a position of the point of view of the operator to simulation system 604. In some embodiments, physical vehicle 602 wirelessly transmits the position of the point of view to simulation system 604. In some embodiments, a telemetry system (e.g., telemetry system 104 of FIG. 1) coupled to physical vehicle 602 can perform the transmitting.
In step 622, simulation system 604 calculates a first portion of the virtual vehicle visible from the position of the point of view of the operator. In some embodiments, as will be further described with respect to FIG. 9, simulation system 604 calculates the first portion by determining which portions of the virtual vehicle are unobscured by the representation of the physical vehicle (with respect to a virtual position within the racing simulation generated in step 621). In particular, simulation system 604 can determine the virtual position to correspond to the position of the point of view provided by physical vehicle 602 as described in step 613. In some embodiments, simulation system 604 calculates the first portion by determining which portions of the virtual vehicle are unobscured by the representations of physical objects in the racing simulation. In some embodiments, as described above with respect to FIGS. 3A-D, portions of the virtual vehicle within the racing simulation may be obstructed by representations of other physical or virtual vehicles with respect to the virtual position of the point of view of the operator. For example, a representation of a physical object may obscure a portion of the virtual vehicle from a virtual position of the point of view when the representation of the physical object is positioned on a straight line between the virtual position of the point of view and the obscured portion of the virtual vehicle. In some embodiments, the first portion calculated by simulation system 604 can include the unobstructed portion of the virtual vehicle, as described above. In some embodiments, the first portion can exclude the obstructed portions of the virtual vehicle, as described above.
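The straight-line occlusion test described above can be sketched as a segment-versus-sphere check. This is an illustrative simplification: real engines test against full meshes, and the sphere approximation of an obstacle is an assumption made here for brevity.

```python
import math

def occluded(pov, target, obstacle_center, obstacle_radius):
    """True if a sphere-approximated obstacle lies on the straight line
    segment between the point of view and a point on the virtual vehicle.

    pov, target, obstacle_center: 3-D points; obstacle_radius: sphere radius.
    """
    d = [t - p for t, p in zip(target, pov)]          # segment direction
    f = [c - p for c, p in zip(obstacle_center, pov)] # POV -> obstacle
    seg_len2 = sum(c * c for c in d)
    # Parameter (clamped to [0, 1]) of the point on the segment closest
    # to the obstacle center.
    t = max(0.0, min(1.0, sum(a * b for a, b in zip(f, d)) / seg_len2))
    closest = [p + t * c for p, c in zip(pov, d)]
    dist2 = sum((a - b) ** 2 for a, b in zip(closest, obstacle_center))
    return dist2 <= obstacle_radius ** 2
```

A point on the virtual vehicle would be included in the "first portion" only when no representation of a physical object returns `True` for this test.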
In some embodiments, the first portion of the virtual vehicle can include one or more virtual shadows being generated within the virtual world. In some embodiments, the one or more virtual shadows can include a virtual shadow of the virtual vehicle, a virtual shadow being cast on the virtual vehicle, or both the virtual shadow of the virtual vehicle and the virtual shadow being cast on the virtual vehicle. For example, the virtual shadow being cast on the virtual vehicle may include a virtual shadow of another virtual vehicle, a virtual shadow of a representation of the physical vehicle in the virtual world, or a virtual shadow of other virtual objects being generated in the virtual world.
In some embodiments, the first portion of the virtual vehicle can include a virtual representation as generated by simulation system 604 (e.g., graphics engine 556). In some embodiments, the virtual representation includes a set of graphical elements. In some embodiments, the virtual representation can be generated by simulation system 604 based on a digital 3-D model of the virtual vehicle stored in a database of models (e.g., vehicle models 544).
In step 623, simulation system 604 outputs the first portion calculated in step 622 to physical vehicle 602. In some embodiments, simulation system 604 wirelessly transmits (e.g., via RF circuitry 548 of FIG. 5) the first portion to physical vehicle 602.
In step 614, physical vehicle 602 provides the first portion to a display system (e.g., display system 102 of FIG. 1 or display system 514 of FIG. 5). In some embodiments, the first portion received from simulation system 604 can include kinematics information calculated by simulation system 604. In some embodiments, the first portion received from simulation system 604 can include graphical information. In some embodiments, the telemetry system (e.g., telemetry system 104 of FIG. 1) coupled to physical vehicle 602 can receive information related to the first portion of the virtual vehicle.
In step 615, physical vehicle 602 displays the first portion of the virtual vehicle on a display (e.g., display 512 of FIG. 5) proximate to physical vehicle 602. In some embodiments, the display system (e.g., rendering component 107 of FIG. 1) renders the first portion of the virtual vehicle on the display. In some embodiments, a rendering component (e.g., rendering component 107 of FIG. 1) in the display system translates the first portion of the virtual vehicle into a virtual representation for displaying on the display. In some embodiments, the virtual representation includes a set of graphical elements. In some embodiments, the display system displays a series of representations of the virtual vehicle (each representation including a visible portion of the virtual vehicle output in step 623) over a period of time by repeating one or more steps (e.g., steps 610-615 and 620-623) of method 600 to simulate a trajectory of the virtual vehicle on the racecourse in the field of view of the operator.
In some embodiments, the display system includes a simulation component (e.g., simulation component 106) that generates the virtual representation. In some embodiments, the virtual representation is generated based on a digital 3-D model of the virtual vehicle. In some embodiments, the digital 3-D model is stored in memory of the display system. The digital 3-D model may be received from, for example, the simulation system.
In some embodiments, the display includes one or more windows of physical vehicle 602. In some embodiments, the display can include one or more windows or mirrors of physical vehicle 602, such as any of the following displays as described with respect to FIG. 1: front windshield 120, rear-view mirror 122, rear windshield 124, side windows 126A and 126B, and side mirrors 128A and 128B. In some embodiments, the display can be implemented within a visor or a headset (e.g., helmet 116 of FIG. 1) worn by the operator of physical vehicle 602.
In some embodiments, simulation system 604 performs steps similar to steps 622 and 623 to enable other viewers such as viewer 606 to view the virtual vehicle from other points of view. In some embodiments, viewer 606 can be an audience member at a live racing event watching a race between physical vehicle 602 on a physical racecourse and the virtual vehicle, which is not physically present on the physical racecourse.
Although illustrated together in one system, in some embodiments one of the first visible portion and the second visible portion is calculated and output without calculating and outputting the other portion.
In step 630, a display system of viewer 606 receives a selection for a second point of view. In some embodiments, the selection can be a point of view of viewer 606. For example, viewer 606 may be an audience member present at the physical racecourse and observing physical vehicle 602 on the racecourse. In some embodiments, the selection can be a point of view of a video camera present at the physical racecourse and imaging a portion of the racecourse on which physical vehicle 602 is racing. When physical vehicle 602 is traveling on the portion of the racecourse being captured by the video camera, the video camera may image physical vehicle 602 on a video feed. When the physical vehicle is not traveling across the portion of the racecourse being captured by the camera, the camera may still capture the portion of the racecourse. In some embodiments, the selection for the second point of view can default to the point of view of viewer 606 or a video camera.
In step 631, the display system of viewer 606 identifies a position of the second point of view. In some embodiments where the second point of view is the point of view of viewer 606, the position of the second point of view can be determined with respect to the head of viewer 606 as detected by a sensor (e.g., sensors 598) proximate to viewer 606. For example, in some embodiments, the position of the second point of view can be determined by at least one of the following: tracking head position of viewer 606, identifying a vector from a point on the head to a fixed point in a venue, or identifying a vector from a point on a head gear to a fixed point in the venue. In some embodiments, the venue may include the physical racecourse, the stands, or other infrastructure at the physical racecourse.
In some embodiments where the second point of view is the point of view of viewer606, the position of the second point of view can be determined with respect to eyes of viewer606as detected by a sensor (e.g., sensors598) proximate to viewer606. For example, the position of the second point of view can include detecting a spatial position of a user's eyes, a gaze direction of the user's eyes, or a focus point of the user's eyes where the user is viewer606. For example, in some embodiments, the position of the second point of view can be determined by at least one of the following: measuring a point of gaze of eyes, tracking eye movement, identifying a vector from a point on eye-wear (e.g., a visor) to a fixed point in a venue, or identifying a vector from one or both eyes to a fixed point in the venue. In some embodiments, the venue may include the physical racecourse, the stands, or other infrastructure at the physical racecourse. In some embodiments, the position of the second point of view can be identified by measuring light reflection or refraction from the eyes of viewer606.
In step 632, the display system of viewer 606 provides the position of the second point of view to simulation system 604.
In step 624, simulation system 604 calculates a second portion of the virtual vehicle visible from the position of the second point of view. In some embodiments, as will be further described with respect to FIG. 9, simulation system 604 calculates the second portion by determining which portions of the virtual vehicle are unobscured by the representation of the physical vehicle with respect to a virtual position within the racing simulation generated in step 621. In particular, simulation system 604 can determine the virtual position to correspond to the position of the second point of view provided by the display system of viewer 606 as described in step 632.
In step 625, simulation system 604 outputs the second portion calculated in step 624 to the display system of viewer 606. In some embodiments, simulation system 604 wirelessly transmits (e.g., via RF circuitry 548 of FIG. 5) the second portion to the display system of viewer 606.
In step 633, a wireless interface proximate to viewer 606 provides the second portion to the display system of viewer 606. In some embodiments, the second portion received from simulation system 604 can include kinematics information calculated by simulation system 604. In some embodiments, the second portion received from simulation system 604 can include graphical information. In some embodiments where the display system of viewer 606 includes the wireless interface, the display system of viewer 606 can directly receive the second portion.
In step 634, the display system of viewer 606 displays the second portion of the virtual vehicle on a display proximate to viewer 606. In some embodiments, the display system of viewer 606 renders the second portion of the virtual vehicle on the display. In some embodiments, the display proximate to viewer 606 can be implemented in a visor or a helmet worn by viewer 606.
In some embodiments, a non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display, the one or more programs including instructions for implementing any of the steps described above with respect to FIG. 6. In some embodiments, a non-transitory computer-readable storage medium comprises computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the steps described above with respect to FIG. 6. In some embodiments, a system comprises at least one of the foregoing non-transitory computer readable storage mediums and one or more processors configured to execute the instructions of the non-transitory computer readable storage medium(s). In some embodiments, a device comprises one or more processors configured to perform any of the steps described above with respect to FIG. 6.
FIG. 7 is a flowchart illustrating a method 700 for providing two-way interactions between a virtual vehicle and a physical vehicle to an operator of the physical vehicle, according to some embodiments. Method 700 may, for example, be implemented by components within a physical vehicle such as physical vehicle 101 of FIG. 1. In some embodiments, method 700 enhances method 600 to provide tactile and audio feedback in addition to the visual feedback displayed and described with respect to method 600.
In step 702, a telemetry system (e.g., telemetry system 104) monitors kinematics information of the physical vehicle (e.g., physical vehicle 101). In some embodiments, the telemetry system includes one or more sensors to detect the kinematics information. For example, the one or more sensors may include a GPS receiver, an accelerometer, a speedometer, an orientation sensor, or a gyroscope, among other types of sensors.
In step 704, the telemetry system transmits the kinematics information to a simulation system (e.g., simulation system 140). In some embodiments, the kinematics information is transmitted via RF circuitry (e.g., RF circuitry 105).
In some embodiments, the simulation system simulates a virtual race between the virtual vehicle and the physical vehicle in a virtual world based on the telemetered kinematics information. In some embodiments, the virtual world includes a virtual racecourse where the virtual vehicle and a representation of the physical vehicle are simulated on the virtual racecourse. In some embodiments, the simulation system calculates a distance between the virtual vehicle and the physical vehicle on the virtual racecourse. Based on the calculated distance, the simulation system determines whether a contact (e.g., a collision) exists between the virtual vehicle and the physical vehicle on the virtual racecourse. Then, the simulation system calculates force information that corresponds to the determined contact.
In some embodiments, the simulation system calculates audio information based on the calculated distance and whether the contact exists. In some embodiments, the audio information includes one or more of the sounds of engines, brakes, tires, or explosions, as well as the volume level of the one or more sounds. For example, the simulation system may calculate the volume of the one or more sounds to be inversely proportional to the calculated distance between the virtual vehicle and the physical vehicle on the virtual racecourse.
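The inverse-distance volume rule above can be sketched in a few lines of Python. The reference distance and clamping behavior are assumptions added here so the volume stays bounded at close range; the patent only states the inverse proportionality.

```python
def simulated_volume(distance, max_volume=1.0, reference_distance=5.0):
    """Volume of a simulated sound (engine, brakes, tires), falling off
    inversely with the distance between the virtual vehicle and the
    physical vehicle on the virtual racecourse.

    Within reference_distance the volume is clamped to max_volume so that
    the value never diverges as distance approaches zero.
    """
    if distance <= reference_distance:
        return max_volume
    return max_volume * reference_distance / distance
```

At twice the reference distance the volume is half of maximum, at four times it is one quarter, and so on.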
In step 706, a force controller (e.g., force controller 112) receives force information from the simulation system. For example, a display system (e.g., display system 102) may receive and forward the force information to the force controller. In some embodiments, some or all of the functionality of calculating the force information may be performed at the physical vehicle in a simulation component (e.g., simulation component 106). In these embodiments, the simulation component receives kinematics information of the virtual vehicle or other virtual objects, as will be described with respect to step 816 of FIG. 8. Then, the simulation component may perform the force calculations to generate the force information.
In step 708, the display system receives audio information from the simulation system. In some embodiments, some or all of the functionality of calculating the audio information may be performed at the physical vehicle in the simulation component. In these embodiments, the simulation component receives kinematics information of the virtual vehicle or other virtual objects, as will be described with respect to step 816 of FIG. 8. Then, the simulation component may perform the audio calculations to generate the audio information.
In step 710, the force controller controls one or more mechanical elements implemented in the physical vehicle based on the received force information. In some embodiments, the force controller transmits instructions to one or more force actuators (i.e., examples of mechanical elements) to emulate the physical sensations the operator of the physical vehicle would feel if there were real physical contact between the physical vehicle and another vehicle displayed as a virtual representation of the virtual vehicle. In some embodiments, the one or more force actuators may be implemented within a seat and head brace (e.g., seat and head brace 130) or within a haptic suit (e.g., haptic suit 118) worn by the operator.
In some embodiments, the mechanical elements may include parts that affect the functionality of the physical vehicle. For example, the mechanical elements may include a steering wheel column, brakes, airbags, etc. Based on the received force information, the force controller may, for example, lock the brakes, deploy the airbags, vibrate the steering wheel column, create a bumping force on a section of the vehicle, slow the car by reducing power, or control other mechanical and/or electrical elements within the physical vehicle.
In step 712, the display system can control one or more speakers of the physical vehicle (e.g., speakers 132) to output the audio information. In some embodiments, the display system can control one or more speakers within a helmet worn by the operator (e.g., helmet 116) to output the audio information.
In some embodiments, a non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display, the one or more programs including instructions for implementing any of the steps described above with respect to FIG. 7. In some embodiments, a non-transitory computer-readable storage medium comprises computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the steps described above with respect to FIG. 7. In some embodiments, a system comprises at least one of the foregoing non-transitory computer readable storage mediums and one or more processors configured to execute the instructions of the non-transitory computer readable storage medium(s). In some embodiments, a device comprises one or more processors configured to perform any of the steps described above with respect to FIG. 7.
FIG. 8 is a flowchart illustrating a method 800 for simulating a race between a virtual vehicle and a physical vehicle to provide two-way interactions, according to some embodiments. Method 800 may, for example, be implemented by a simulation system such as simulation system 140 described with respect to FIG. 1 or simulation system 540 described with respect to FIG. 5. As described with respect to FIGS. 1 and 5, the simulation system simulates a virtual race between the virtual vehicle and the physical vehicle on a virtual racecourse in a virtual world, where the virtual racecourse is simulated to correspond to a physical racecourse.
In step 802, the simulation system receives input from a controller (e.g., input controller 536) to control a virtual vehicle on the virtual racecourse. In some embodiments, the controller may be a keyboard, a mouse, a video game controller, a joystick, a steering wheel and pedals, a gesture on a touch screen, or a combination thereof, among other types of input devices.
In step 804, the simulation system receives kinematics information for the physical vehicle (e.g., physical vehicle 101 of FIG. 1). In some embodiments, the kinematics information is received from the physical vehicle as described with respect to FIG. 1.
In step 806, the simulation system simulates the virtual race between the virtual vehicle and the physical vehicle on the virtual racecourse. In some embodiments, the simulation system simulates the virtual race according to one or more of steps 808-813. In step 808, the simulation system determines kinematics information of the virtual vehicle based on the input received in step 802. For example, the input may include an amount of force applied to a video game controller that is translated by the simulation system into an acceleration amount. In step 810, the simulation system determines an interaction between the virtual vehicle and the physical vehicle on the virtual racecourse by comparing the kinematics information of the virtual vehicle and the physical vehicle. In some embodiments, the simulation system determines a distance between the virtual vehicle and the physical vehicle simulated on the virtual racecourse to determine whether a contact (e.g., a collision) occurs. In step 812, the simulation system generates force information based on the interaction determined in step 810. In step 813, the simulation system generates audio information based on the interaction determined in step 810.
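The distance-based contact check and force generation described in steps 810 and 812 can be sketched as follows. The contact radius, the closing-speed force model, and all names are illustrative assumptions; the patent does not specify a force model.

```python
import math

def detect_contact(virtual_pos, physical_pos, contact_radius=2.0):
    """Step-810-style interaction check: compare positions on the virtual
    racecourse and report a contact when the separation falls below a
    combined-extent threshold. Returns (contact, distance)."""
    distance = math.dist(virtual_pos, physical_pos)
    return distance <= contact_radius, distance

def force_magnitude(virtual_speed, physical_speed, contact):
    """Step-812-style force information: a crude magnitude taken from the
    closing speed of the two vehicles at the moment of contact."""
    return abs(virtual_speed - physical_speed) if contact else 0.0
```

The resulting magnitude would then be packaged as force information and transmitted to the physical vehicle's force controller, as in step 818.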
In step 816, the simulation system transmits the kinematics information of the virtual vehicle to the physical vehicle. In some embodiments, the physical vehicle uses the kinematics information of the virtual vehicle to generate and display the virtual vehicle on a display of the physical vehicle.
In step 818, the simulation system transmits the force information to the physical vehicle as described with respect to step 706 of FIG. 7. In some embodiments, the physical vehicle controls one or more mechanical or electrical elements of the physical vehicle based on the force information to emulate the physical sensations that would be felt by an operator of the physical vehicle in a real collision.
In step 820, the simulation system transmits generated audio information to the physical vehicle as described with respect to step 708 of FIG. 7. In some embodiments, the physical vehicle controls one or more speakers of the physical vehicle based on the audio information to emulate the auditory experience that the operator of the physical vehicle would have should the virtual vehicle be physically present on the physical racecourse. For example, the one or more speakers may include vehicle speakers or speakers implemented within a headset worn by the operator.
In step 814, the simulation system renders the virtual race on a simulation display (e.g., display 534 of FIG. 5).
In some embodiments, a non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device, the one or more programs including instructions for implementing any of the steps described above with respect to FIG. 8. In some embodiments, a non-transitory computer-readable storage medium comprises computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the steps described above with respect to FIG. 8. In some embodiments, a system comprises at least one of the foregoing non-transitory computer readable storage mediums and one or more processors configured to execute the instructions of the non-transitory computer readable storage medium(s). In some embodiments, a device comprises one or more processors configured to perform any of the steps described above with respect to FIG. 8.
FIG. 9 is a flowchart illustrating a method 900 performed by a simulation system to enable display of virtual vehicles, according to some embodiments. Method 900 may, for example, be implemented by simulation system 140 described with respect to FIG. 1 or simulation system 540 described with respect to FIG. 5. In some embodiments, one or more steps of method 900 may correspond to one or more steps performed by simulation system 604 as described with respect to FIG. 6.
In step 902, the simulation system receives inputs controlling a virtual vehicle. For example, the inputs may be received from input controller 536 as described with respect to FIG. 5. In some embodiments, step 902 corresponds to step 620 of FIG. 6.
In step 904, the simulation system receives a position of a physical vehicle. In some embodiments, the position of the physical vehicle can be provided by the physical vehicle, e.g., as described with respect to step 611 of FIG. 6.
In step 906, the simulation system calculates a virtual world for simulating a race on a virtual racecourse between the virtual vehicle and the physical vehicle. In some embodiments, the virtual world can be a racing simulation stored in racing simulation 550 as described in FIG. 5. In some embodiments, step 906 corresponds to step 621 of FIG. 6. In some embodiments, to calculate the virtual world, method 900 performs steps 908-912.
In step 908, the simulation system simulates the virtual vehicle on the virtual racecourse in the virtual world based on the inputs received in step 902. In some embodiments, the inputs control kinematic characteristics that define how the virtual vehicle moves on the virtual racecourse in the virtual world.
In step 910, the simulation system calculates a representation of the physical vehicle in the virtual world based on the position of the physical vehicle received in step 904. In some embodiments, the simulation system can simulate the race between the physical vehicle and the virtual vehicle by adding the representation of the physical vehicle to the virtual world. In some embodiments, to calculate the representation of the physical vehicle, the simulation system transforms physical coordinates associated with the position of the physical vehicle to virtual coordinates within the virtual world.
In step 912, the simulation system calculates a plurality of representations of objects in the virtual world. In some embodiments, a representation of an object (from the plurality of representations) corresponds to a physical object that is present in a physical racecourse being modeled in the virtual world. For example, the virtual racecourse in the virtual world may be simulated based on the physical racecourse, which may include physical objects such as trees, banners, pit stops, etc. In some embodiments, a representation of an object (from the plurality of representations) corresponds to a virtual object that is present on the virtual racecourse being simulated in the virtual world and that is not present on the physical racecourse. For example, the virtual object may include, without limitation, simulated obstacles, smoke, walls, explosions, or debris resulting from a collision between the virtual vehicle and the physical vehicle being simulated in the virtual world.
In some embodiments, the virtual world simulated by the simulation system can include the plurality of calculated representations of physical objects. In some embodiments, the simulation system can calculate the plurality of representations of physical objects by accessing a database of representations.
In step 914, the simulation system receives a position of a point of view at the racecourse. In some embodiments, the position can be received from a physical vehicle as described with respect to step 613 of FIG. 6. In these embodiments, the position represents a position of a point of view of an operator of the physical vehicle. In some embodiments, the position can be received from a viewer as described with respect to step 632 of FIG. 6. In these embodiments, the position represents a position of a point of view selected by the viewer.
In step 916, the simulation system calculates a portion of the virtual vehicle visible from the position of the point of view received in step 914. In some embodiments, step 916 corresponds to step 622 or step 624 as described with respect to FIG. 6, depending on the source of the position of the point of view received in step 914. In some embodiments, calculating the portion includes calculating a field of view from the virtual position of the point of view. In these embodiments, the calculated portion can be within the calculated field of view. In some embodiments, to calculate the portion of the virtual vehicle visible from the point of view, method 900 performs steps 918-926.
In step 918, the simulation system calculates a virtual position of the point of view within the virtual world based on the position of the point of view received in step 914. In some embodiments, to calculate the virtual position, the simulation system transforms the physical coordinates of the position of the point of view to virtual coordinates within the virtual world.
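The physical-to-virtual coordinate transform in steps 910 and 918 can be sketched as a planar translate-rotate-scale mapping. This is a deliberately simplified 2-D illustration; the origin, yaw, and scale parameters are assumptions, since the patent leaves the transform unspecified.

```python
import math

def physical_to_virtual(point, origin, yaw_deg=0.0, scale=1.0):
    """Map a 2-D physical track coordinate (e.g., derived from GPS) into
    virtual-world coordinates.

    The physical point is translated so that `origin` maps to the virtual
    origin, rotated by yaw_deg to align the track axes, then scaled.
    """
    x, y = point[0] - origin[0], point[1] - origin[1]
    a = math.radians(yaw_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return (xr * scale, yr * scale)
```

The same transform would be applied to both the physical vehicle's position (step 910) and the point of view (step 918), so that their relative geometry is preserved in the virtual world.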
In step 920, the simulation system determines whether one or more representations of objects exist between the virtual position and the virtual vehicle in the virtual world. In some embodiments, the one or more representations of physical objects are selected from the plurality of representations of physical objects calculated in step 912. As described above with respect to step 912, the one or more representations of objects can include virtual representations of physical objects that are present on the physical racecourse. In some embodiments, the one or more representations of objects can include virtual objects that are simulated in the virtual world but are not present on the physical racecourse. In step 922, if one or more representations of the objects exist, method 900 proceeds to step 926. Otherwise, method 900 proceeds to step 924.
In step 924, the simulation system extracts portions of the virtual vehicle that are unobscured, from the virtual position, by the representation of the physical vehicle in the virtual world.
In step 926, the simulation system extracts portions of the virtual vehicle that are unobscured, from the virtual position, by the representation of the physical vehicle and the one or more representations of objects determined in step 920.
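Steps 920-926 can be sketched as a coarse line-of-sight test. As an illustrative assumption, each representation of an object is reduced to a bounding sphere and the virtual vehicle is sampled at a handful of points; a sample point belongs to the unobscured portion when no sphere intersects the segment from the virtual viewpoint to that point. All names and the sphere approximation are hypothetical:

```python
import math

def segment_hits_sphere(a, b, center, radius):
    """True if the segment from a to b passes through the sphere (center, radius)."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [center[i] - a[i] for i in range(3)]
    ab_len2 = sum(v * v for v in ab)
    if ab_len2 == 0.0:  # degenerate segment: viewpoint equals sample point
        return math.dist(a, center) <= radius
    # Parameter of the point on the segment closest to the sphere center.
    t = max(0.0, min(1.0, sum(ab[i] * ac[i] for i in range(3)) / ab_len2))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(closest, center) <= radius

def unobscured_points(pov, vehicle_points, obstacles):
    """Steps 924/926 sketch: keep vehicle sample points whose line of sight
    from the viewpoint is not blocked by any obstacle sphere."""
    return [p for p in vehicle_points
            if not any(segment_hits_sphere(pov, p, c, r) for c, r in obstacles)]
```

A renderer would typically use the actual meshes and a depth buffer instead of bounding spheres; the sphere test only conveys the branch structure of steps 920-926.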
In step 928, the simulation system provides a portion of the virtual vehicle visible from the virtual position of the point of view to include one or more of the extracted portions. In some embodiments, the portion being output includes only the extracted portions. In some embodiments, as discussed above, the simulation system can calculate the field of view from the virtual position of the point of view. In these embodiments, the portion calculated by the simulation system can include: a non-excluded portion representing parts of the portion that are visible within the calculated field of view; and an excluded portion representing parts of the portion that are excluded (i.e., not visible) within the calculated field of view. In some embodiments where the field of view is calculated by the simulation system, the simulation system can calculate the portion to include only the non-excluded portion representing parts of the portion that are visible within the calculated field of view.
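The field-of-view partition described in step 928 can be approximated with an angular test: a sample point of the virtual vehicle falls in the non-excluded portion when the angle between the viewing direction and the vector to the point is at most half the field-of-view angle. The 90-degree default and the function names are illustrative assumptions:

```python
import math

def within_fov(pov, view_dir, point, fov_degrees=90.0):
    """Step 928 sketch: True if `point` lies inside the cone of view
    centered on `view_dir` with total opening angle `fov_degrees`."""
    to_point = [point[i] - pov[i] for i in range(3)]
    dot = sum(view_dir[i] * to_point[i] for i in range(3))
    norm = math.hypot(*view_dir) * math.hypot(*to_point)
    if norm == 0.0:
        return True  # point coincides with the viewpoint
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= fov_degrees / 2.0

def split_portions(pov, view_dir, points, fov_degrees=90.0):
    """Partition vehicle sample points into a non-excluded (visible)
    portion and an excluded (outside the field of view) portion."""
    visible = [p for p in points if within_fov(pov, view_dir, p, fov_degrees)]
    excluded = [p for p in points if p not in visible]
    return visible, excluded
```

Providing only the non-excluded portion, as the step describes, corresponds to returning just the first element of the pair.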
In some embodiments, the simulation system provides the portion of the virtual vehicle to the source originating the position of the point of view as described with respect to step 914. For example, step 928 may correspond to steps 623 or 625 as described with respect to FIG. 6, depending on a source of the point of view as described with respect to step 914. In this example, if the position of the point of view is received from the physical vehicle, the simulation system may provide the portion of the virtual vehicle to the physical vehicle as described with respect to step 623 of FIG. 6.
In some embodiments, the virtual world can be a racing simulation stored in racing simulation 550 as described in FIG. 5. In some embodiments, step 906 corresponds to step 621 of FIG. 6. In some embodiments, to calculate the virtual world, method 900 performs steps 908-912.
In some embodiments, a non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device, the one or more programs including instructions for implementing any of the steps described above with respect to FIG. 9. In some embodiments, a non-transitory computer-readable storage medium comprises computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform steps described above with respect to FIG. 9. In some embodiments, a system comprises at least one of the foregoing non-transitory computer readable storage mediums, and one or more processors configured to execute the instructions of the non-transitory computer readable storage medium(s). In some embodiments, a device comprises one or more processors configured to perform any of the steps described above with respect to FIG. 9.
FIG. 10 is a flowchart illustrating a method 1000 to enable display of virtual vehicles, according to some embodiments. Method 1000 may, for example, be implemented by components within a physical vehicle such as physical vehicle 101 of FIG. 1 or physical vehicle 510 of FIG. 5. In some embodiments, by simulating the virtual vehicle within a field of view of an operator of the physical vehicle on a racecourse, method 1000 enhances the realism of the interaction between the physical vehicle and the virtual vehicle as experienced by the operator.
In step 1002, a sensor in the physical vehicle (e.g., eye-position detector 110) detects eyes measurements of an operator (e.g., vehicle operator 114) of the physical system (e.g., physical vehicle 101). In some embodiments, the sensor estimates the eyes measurements based on detecting the operator's head or a device worn on the head of the operator (e.g., helmet 116, visor 117 over the eyes, or a head-mounted display (HMD)). For example, the sensor may estimate the eyes measurements of the operator's eyes based on detecting a position and/or an orientation of the device worn on the operator's head.
In step 1004, a display system in the physical vehicle (e.g., rendering component 107) identifies a position (e.g., position 208A from FIG. 2) of a physical object in the field of view of the operator. In some embodiments, the position corresponds to a location on a display (e.g., display 220 from FIG. 2) in proximity to the physical system.
In step 1006, the display system receives kinematics information of the virtual vehicle representing a competitor vehicle not physically on the racecourse. Further, the display system may receive information from a simulation system (e.g., simulation system 140) related to virtual objects within racing simulation 550 or racing simulation 141 that are not physically present on the racecourse. In some embodiments, as described with respect to FIG. 1, the kinematics information may include GPS coordinates, spatial position, orientation, velocity, acceleration, or a combination thereof associated with the virtual vehicle. In some embodiments, the kinematics information can be received from a simulation system (e.g., simulation system 140) that simulates a race between the physical vehicle and the virtual vehicle on a simulated racecourse.
In step 1008, the display system generates a representation of the virtual vehicle based on the position of the physical object identified in step 1004, the eyes measurements detected in step 1002, and the kinematics information received in step 1006. In some embodiments, the display system includes a simulation component (e.g., simulation component 106) that generates the representation. Further, in embodiments where the display system receives information for other virtual objects as described in step 1006, the display system similarly generates graphical representations for these virtual objects. For example, virtual objects may include a wall, debris from a virtual car, or objects on a virtual racecourse being simulated in racing simulation 550. In some embodiments, the virtual representation is generated based on a digital 3-D model of the virtual vehicle. In some embodiments, the digital 3-D model is stored in memory of the display system. The digital 3-D model may be received from, for example, the simulation system.
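As a rough sketch of step 1008, the display system can be thought of as projecting the virtual vehicle's position (taken from its kinematics information) onto a display plane a fixed distance in front of the operator's eyes. The pinhole-style projection, the fixed display depth, and the function names below are simplifying assumptions, not the patent's method:

```python
def project_to_display(eye, world_point, display_z=0.5):
    """Project a 3-D world point onto a display plane `display_z` in
    front of the eye (assumed perpendicular to the z axis)."""
    dz = world_point[2] - eye[2]
    if dz <= 0:
        return None  # behind the operator; nothing to draw
    s = display_z / dz  # similar-triangles scale factor
    x = eye[0] + s * (world_point[0] - eye[0])
    y = eye[1] + s * (world_point[1] - eye[1])
    return (x, y)

def place_virtual_vehicle(eye, kinematics, display_z=0.5):
    """Step 1008 sketch: turn the virtual vehicle's reported spatial
    position (from its kinematics) into a 2-D display location."""
    return project_to_display(eye, kinematics["position"], display_z)
```

Aligning the result with the physical object identified in step 1004 would then amount to drawing the representation at the computed display location relative to that object's on-screen position.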
In step 1010, the display system (e.g., rendering component 107) displays the representation of the virtual vehicle on the display to align with the physical object represented by the identified position of step 1004. In some embodiments, a rendering component (e.g., rendering component 107 of FIG. 1) in the display system translates the representation into a set of graphical elements for displaying on the display. In some embodiments, the display system displays a series of representations of the virtual vehicle over a period of time by repeating one or more steps (e.g., steps 1002-1010) of method 1000 to simulate a trajectory of the virtual vehicle on the racecourse in the field of view of the operator.
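Repeating steps 1002-1010 to display a series of representations over time amounts to a render loop. The sketch below stands in for the sensor, the simulation-system link, and the renderer with caller-supplied stub functions; all names are hypothetical:

```python
# Schematic render loop for method 1000 (steps 1002-1010). Each iteration
# reads the sensor, fetches kinematics, and renders one representation;
# the accumulated frames form the series that traces the trajectory.
def render_loop(frames, detect_eyes, get_kinematics, draw):
    trajectory = []
    for _ in range(frames):
        eyes = detect_eyes()        # step 1002: sensor reading
        kin = get_kinematics()      # step 1006: simulation update
        frame = draw(eyes, kin)     # steps 1008-1010: generate and display
        trajectory.append(frame)
    return trajectory  # the series of representations over time
```

A real implementation would run this against a frame clock rather than a fixed count, but the shape of the repetition is the same.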
In some embodiments, the representation can be generated remotely by, for example, the simulation system. In these embodiments, the display system receives information related to the representation as generated by the simulation system. Further, the rendering component may translate this received information into a set of graphical elements for displaying on the display.
In some embodiments, a non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display, the one or more programs including instructions for implementing any of the steps described above with respect to FIG. 10. In some embodiments, a non-transitory computer-readable storage medium comprises computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform steps described above with respect to FIG. 10. In some embodiments, a system comprises at least one of the foregoing non-transitory computer readable storage mediums, and one or more processors configured to execute the instructions of the non-transitory computer readable storage medium(s). In some embodiments, a device comprises one or more processors configured to perform any of the steps described above with respect to FIG. 10.
FIG. 11 illustrates an example of a computer in accordance with one embodiment. Computer 1100 can be a component of a system for simulating virtual vehicles on a display according to the systems and methods described above, such as the devices in physical vehicle 101 or simulation system 140 described with respect to FIG. 1, or can include the entire system itself. In some embodiments, computer 1100 is configured to execute a method for enhancing a virtual race between a physical vehicle and a virtual vehicle, such as each of methods 600, 700, 800, 900, and 1000 of FIGS. 6, 7, 8, 9, and 10, respectively.
Computer 1100 can be a host computer connected to a network. Computer 1100 can be a client computer or a server. As shown in FIG. 11, computer 1100 can be any suitable type of microprocessor-based device, such as a personal computer, workstation, server, videogame console, or handheld computing device, such as a phone or tablet. The computer can include, for example, one or more of processor 1110, input device 1120, output device 1130, storage 1140, and communication device 1160. Input device 1120 and output device 1130 can generally correspond to those described above and can either be connectable or integrated with the computer.
Input device 1120 can be any suitable device that provides input, such as a touch screen or monitor, keyboard, mouse, or voice-recognition device. Output device 1130 can be any suitable device that provides output, such as a touch screen, monitor, printer, disk drive, or speaker.
Storage 1140 can be any suitable device that provides storage, such as an electrical, magnetic, or optical memory, including a RAM, cache, hard drive, CD-ROM drive, tape drive, or removable storage disk. Communication device 1160 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or card. The components of the computer can be connected in any suitable manner, such as via a physical bus or wirelessly. Storage 1140 can be a non-transitory computer-readable storage medium comprising one or more programs, which, when executed by one or more processors, such as processor 1110, cause the one or more processors to execute methods described herein, such as each of methods 600, 700, 800, 900, and 1000 of FIGS. 6, 7, 8, 9, and 10, respectively.
Software 1150, which can be stored in storage 1140 and executed by processor 1110, can include, for example, the programming that embodies the functionality of the present disclosure (e.g., as embodied in the systems, computers, servers, and/or devices as described above). In some embodiments, software 1150 can be implemented and executed on a combination of servers such as application servers and database servers.
Software 1150, or part thereof, can also be stored and/or transported within any computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch and execute instructions associated with the software from the instruction execution system, apparatus, or device. In the context of this disclosure, a computer-readable storage medium can be any medium, such as storage 1140, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.
Software 1150 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch and execute instructions associated with the software from the instruction execution system, apparatus, or device. In the context of this disclosure, a transport medium can be any medium that can communicate, propagate, or transport programming for use by or in connection with an instruction execution system, apparatus, or device. The transport-readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation medium.
Computer 1100 may be connected to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications protocol and can be secured by any suitable security protocol. The network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
Computer 1100 can implement any operating system suitable for operating on the network. Software 1150 can be written in any suitable programming language, such as C, C++, Java, or Python. In various embodiments, application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement or through a Web browser as a Web-based application or Web service, for example.
The foregoing description sets forth exemplary methods, parameters and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. The illustrative embodiments described above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described to best explain the principles of the disclosed techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various embodiments with various modifications as are suited to the particular use contemplated.
Although the disclosure and examples have been fully described with reference to the accompanying figures, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims. In the foregoing description of the disclosure and embodiments, reference is made to the accompanying drawings, in which are shown, by way of illustration, specific embodiments that can be practiced. It is to be understood that other embodiments and examples can be practiced, and changes can be made without departing from the scope of the present disclosure.
Although the foregoing description uses terms first, second, etc. to describe various elements, these elements should not be limited by the terms. These terms are only used to distinguish one element from another. For example, a first virtual vehicle could be termed a second virtual vehicle, and, similarly, a second virtual vehicle could be termed a first virtual vehicle, without departing from the scope of the various described embodiments.
In addition, it is also to be understood that the singular forms “a,” “an,” and “the” used in the foregoing description are intended to include the plural forms as well, unless the context clearly indicates otherwise. It is also to be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It is further to be understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used herein, specify the presence of stated features, integers, steps, operations, elements, components, and/or units but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, units, and/or groups thereof.
The term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
In some embodiments, a non-transitory computer readable storage medium stores one or more programs configured to be executed by one or more processors of an electronic device with a display, the one or more programs including instructions for implementing any of the steps described or claimed herein. The present disclosure also relates to a device for performing the operations herein. This device may be specially constructed for the required purposes, or it may include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, computer readable storage medium, such as, but not limited to, any type of disk, including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referenced in this disclosure may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The methods, devices, and systems described herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.
Claims
- A method for displaying a virtual vehicle comprising: identifying respective positions of multiple points of view at a racecourse; providing the respective positions of the points of view at the racecourse to a simulation system; providing a position of a physical object at the racecourse to the simulation system; calculating, by the simulation system, a virtual world comprising the virtual vehicle; calculating, by the simulation system, respective virtual positions of the points of view within the virtual world based on the respective positions of the points of view at the racecourse; calculating, by the simulation system, a representation of the physical object in the virtual world between the respective virtual positions of the points of view and the virtual vehicle within the virtual world; calculating, by the simulation system, respective portions of the virtual vehicle within the virtual world that are visible from the corresponding virtual positions of the points of view, wherein the respective portions of the virtual vehicle within the virtual world that are visible from the corresponding virtual positions of the points of view comprise respective portions of the virtual vehicle that are unobscured, from the respective virtual position, by the representation of the physical object; outputting, by the simulation system, the respective portions of the virtual vehicle visible from the virtual positions of the points of view; providing, to a display system, the respective portions of the virtual vehicle visible from the virtual positions of the points of view; generating, at the display system, representations of the respective portions of the virtual vehicle visible from the virtual positions of the points of view; and displaying a series of representations of the virtual vehicle over a period of time to simulate a trajectory of the virtual vehicle on the racecourse, wherein the series of representations comprises the generated representations.
- The method of claim 1, wherein the physical object is a physical vehicle at the racecourse.
- The method of claim 2, wherein the respective positions of the points of view at the racecourse comprise a position of a point of view of an operator of the physical vehicle.
- The method of claim 1, wherein the respective positions of the points of view at the racecourse comprise respective positions of points of view of audience members present at the racecourse and observing the physical object.
- The method of claim 1, wherein the respective positions of the points of view at the racecourse comprise positions of cameras present at the racecourse and imaging the physical object.
- The method of claim 1, wherein identifying the respective positions of the multiple points of view at the racecourse comprises: detecting one or more of a spatial position of a user's eyes, a gaze direction of the user's eyes, or a focus point of the user's eyes.
- The method of claim 1, wherein calculating the virtual world comprises transforming physical coordinates of the physical object to virtual coordinates in the virtual world and wherein a virtual position of the physical object in the virtual world comprises the virtual coordinates.
- The method of claim 1, wherein the physical object comprises separate physical objects for each point of view.
- The method of claim 1, wherein calculating the respective portions of the virtual vehicle within the virtual world that are visible from the virtual positions of the points of view comprises calculating, by the simulation system, respective fields of view from the virtual position of the corresponding point of view, and wherein providing, to the display system, the respective portions of the virtual vehicle visible from the virtual positions of the points of view comprises providing the portions of the virtual vehicle within each field of view.
- The method of claim 1, further comprising: simulating, by the simulation system, an interaction between the virtual vehicle and the representation of the physical object in the virtual world, wherein the respective portions of the virtual vehicle visible from the virtual positions of the points of view are calculated based on the interaction.
- A system for displaying virtual vehicles, comprising: a first sensor detecting a position of a first point of view at a racecourse; a second sensor detecting a position of a second point of view at the racecourse; a simulation system: receiving the position of the first point of view from the first sensor and the position of the second point of view from the second sensor, receiving a position of a physical object at the racecourse, calculating a virtual world comprising the virtual vehicle and a representation of the physical object at a virtual position, calculating respective virtual positions of the first point of view and the second point of view within the virtual world based on the received positions from the first and second sensors, wherein the virtual position of the physical object is between the virtual positions of the first and second points of view and the virtual vehicle within the virtual world, calculating respective portions of the virtual vehicle within the virtual world that are visible from the virtual positions of the first and second points of view, wherein the portions of the virtual vehicle visible from the virtual positions of the first and second points of view comprise portions of the virtual vehicle that are unobscured, from the virtual positions of the first and second points of view, by the representation of the physical object at the virtual position of the physical object, and outputting the portions of the virtual vehicle visible from the virtual positions of the first and second points of view; and a display system: receiving, from the simulation system, the portions of the virtual vehicle visible from the virtual positions of the first and second points of view, generating representations of the portions of the virtual vehicle visible from the virtual positions of the first and second points of view, and displaying a series of representations of the virtual vehicle over a period of time to simulate a trajectory of the virtual vehicle on the racecourse, wherein the series of representations comprises the generated representations.
- The system of claim 11, wherein the physical object is a physical vehicle.
- The system of claim 12, wherein the position of the first point of view at the racecourse comprises a position of a point of view of an operator of the physical vehicle at the racecourse.
- The system of claim 11, wherein the position of the first point of view at the racecourse and the position of the second point of view at the racecourse comprise positions of points of view of respective audience members present at the racecourse and observing the physical object on the racecourse.
- The system of claim 11, wherein the positions of the first and second points of view at the racecourse comprise positions of cameras present at the racecourse and imaging the physical object on the racecourse.
- The system of claim 11, wherein calculating the virtual world comprises transforming physical coordinates of the physical object to virtual coordinates in the virtual world and wherein a virtual position of the physical object in the virtual world comprises the virtual coordinates.
- The system of claim 11, wherein at least one of the first and second sensors detects the position of the respective point of view at the racecourse by detecting a spatial position of a user's eyes, a gaze direction of the user's eyes, or a focus point of the user's eyes.
- The system of claim 11, wherein the physical object comprises a first physical object for the first point of view and a second physical object for the second point of view.
- The system of claim 11, wherein calculating the portion of the virtual vehicle within the virtual world that is visible from the virtual position of the first point of view comprises calculating, by the simulation system, a field of view from the virtual position of the first point of view and wherein providing, to the display system, the portion of the virtual vehicle visible from the virtual position of the first point of view comprises providing, by the simulation system, the portion of the virtual vehicle within the field of view.
- The system of claim 11, wherein the simulation system simulates an interaction between the virtual vehicle and the representation of the physical object in the virtual world, and wherein the portion of the virtual vehicle visible from the virtual position of the first point of view is calculated based on the interaction.

