U.S. Pat. No. 10,994,201

METHODS OF APPLYING VIRTUAL WORLD ELEMENTS INTO AUGMENTED REALITY

Assignee: Wormhole Labs, Inc.

Issue Date: March 21, 2019

Illustrative Figure

Abstract

In a method for providing an augmented reality interface for use by a first real-world human user and a second real-world human user, an augmented reality and virtual reality engine (AR-VR engine) produces a visual transformation of the first real-world human user (transformed human user 1), and a visual transformation of a real-world environment around the first real-world human user (transformed environment). The AR-VR engine also produces a virtualized reality world that includes images of the transformed first real-world human user moving about, and interacting with, the transformed environment. The AR-VR engine further provides an electronic interface through which the second real-world human user can interact, in real-time, with at least one of the transformed first real-world human user and the transformed environment.

Description


DETAILED DESCRIPTION

It should be noted that while the following description is drawn to a computer-based system, various alternative configurations are also deemed suitable and may employ various computing devices including servers, interfaces, systems, databases, engines, controllers, or other types of computing devices operating individually or collectively.

One should appreciate the computing devices comprise a processor configured to execute software instructions stored on a tangible, non-transitory computer readable storage medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). The software instructions preferably configure the computing device to provide the roles, responsibilities, or other functionality as discussed below with respect to the disclosed apparatus. In especially preferred embodiments, the various servers, systems, databases, or interfaces exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods. Data exchanges preferably are conducted over a packet-switched network, the Internet, LAN, WAN, VPN, or other type of packet-switched network.

One should appreciate that the disclosed techniques provide many advantageous technical effects including allowing users to access mixed reality environments. Mixed reality environments can include any combination of virtual and augmented reality environments, and can be connected to each other in any manner.

The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed.

As used herein, “real-world”, “real world”, and any similar terms means anything having detectable mass in the physical world. Common examples include everyday objects, such as houses, chairs, and people. At one extreme, “real-world” includes light, as photons of light have detectable mass.

As used herein, “visual transformation”, “visually transformed”, and any similar terms means transformation of a real-world object based on physical features, movement, and/or functionality of the object in the real-world. For example, a visual transformation of a human user based on physical features could mean a change in the visual appearance of the human into an older or younger version of the person, a change in gender or race, a change in clothing or hairstyle, a change in facial expression, or even a change into a non-human or partially human creature. For example, a visual transformation of a human user based on movement could mean a change in gait of the user, from an ordinary walk to a plodding shuffle. Similarly, a visual transformation of an object based upon functionality could render a house as a castle, a dog as a dragon, or a door to a hallway as an entrance to a cave or a dream world.

As used herein, “environment”, “environments”, and any similar terms means the physical space or object about a person, other than clothing, wigs, and accessories. For example, a chair in which a person is sitting is considered to be part of the environment, even if the person is tied to the chair. Similarly, clothing on a hanger in a closet in which a person is standing is considered environment of the person, until the person puts on the clothing. As another example, a Wii Fit™ motion sensor in the hands of a person is considered part of the environment.

With respect to inclusion of space about a person, environment to a given viewer is limited by the context of the person as viewed by that viewer. If a person is viewed as standing or sitting in a room, then the inside of the room as viewed by the viewer is considered the environment. However, if the viewer views a person in the window of a house, from outside the house, the environment is whatever portions of the house and yard are viewed by the viewer.

As used herein, “auditory transformation” means speech or emitted sound changed into a different language, accent, or sound. For example, a dog's bark can be transformed into a dragon's roar. In another example, a human user's American accent can be changed into a British accent. In yet another example, a human user saying the word “roar” can be transformed into the roar of a real-world lion. In yet another example, the ringing of a small bell could be transformed into a giant gong.

As used herein, “interacting with”, “interaction with” and any similar terms means any action causing a perceptible change in the environment and/or a person. For example, a human user can interact with a real, virtual, or augmented reality object by changing movement, size dimensions, number, color, density, power, or any other quality of the object.

As used herein, “real-time”, “real time” and any similar terms mean the actual time during which a process or event occurs, as well as a short time (less than ten seconds) required for computer processing, transmission latency, and intentional lags for someone to experience the process or event. Real-time interactions having delays totaling no more than ten seconds are considered herein to be fuzzy real-time, those having delays totaling no more than five seconds are considered herein to be intermediate real-time, and those having delays totaling no more than one second are considered herein to be close real-time.
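The three delay tiers above can be sketched as a small classifier; the function name and return strings are illustrative helpers, not terms from the patent:

```python
def realtime_tier(delay_seconds: float) -> str:
    """Classify a total delay (processing + transmission latency + intentional
    lag) into the real-time tiers defined above."""
    if delay_seconds <= 1:
        return "close real-time"         # no more than one second
    if delay_seconds <= 5:
        return "intermediate real-time"  # no more than five seconds
    if delay_seconds <= 10:
        return "fuzzy real-time"         # no more than ten seconds
    return "not real-time"               # beyond the ten-second bound
```

For example, a 3-second total delay falls in the intermediate tier, while a 30-second delay falls outside the definition of real-time entirely.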

As used herein, “virtual objects”, “virtual things” and any similar terms mean objects perceivable by a viewer, but having no real-world mass. For example, a virtual ball could be rendered using an appropriate electronic display or other rendering technology, but without the rendering technology, the virtual ball could not be sensed with any of the five senses of touch, smell, sound, sight, and taste. Among other things, a virtual object can represent an ability or a power, including, for example, a force field set around a human user, a quantity of bullets, an energy level, a health level, or an ability to see in the dark.

FIG. 1 is a functional block diagram illustrating a distributed data processing environment having an inventive VR-AR Engine.

The term “distributed” as used herein means a computer system that includes multiple, physically distinct devices configured to operate together as a single computer system. FIG. 1 provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made by those skilled in the art without departing from the scope of the invention as recited by the claims.

Distributed data processing environment 100 includes computing device 104 and server computer 108, interconnected over network 102.

It is contemplated that computing device 104 can be any programmable electronic computing device capable of communicating with various components and devices within distributed data processing environment 100, via network 102. It is further contemplated that computing device 104 can execute machine readable program instructions and communicate with any devices capable of communication wirelessly and/or through a wired connection. Computing device 104 includes an instance of user interface 106.

User interface 106 provides a user interface to VR-AR engine 110. Preferably, user interface 106 comprises a graphical user interface (GUI) or a web user interface (WUI) that can display one or more of text, documents, web browser windows, user options, application interfaces, and operational instructions. It is also contemplated that user interface 106 can include information, such as, for example, graphics, texts, and sounds that a program presents to a user, as well as the control sequences that allow a user to control a program.

In some embodiments, user interface 106 is mobile application software. Mobile application software, or an “app,” is a computer program designed to run on smart phones, tablet computers, and any other mobile devices.

User interface 106 can allow a user to register with and configure VR-AR engine 110 (discussed in more detail below) to enable a user to access a mixed reality space. It is contemplated that user interface 106 can allow a user to provide any information to VR-AR engine 110.

Server computer 108 can be a standalone computing device, a management server, a web server, a mobile computing device, or any other computing system capable of receiving, sending, and processing data.

It is contemplated that server computer 108 can include a server computing system that utilizes multiple computers as a server system, such as, for example, a cloud computing system.

In other embodiments, server computer 108 can be a computer system utilizing clustered computers and components that act as a single pool of seamless resources when accessed within distributed data processing environment 100.

Network 102 can include, for example, a telecommunications network, a local area network (LAN), a wide area network (WAN), such as the Internet, or a combination of the three, and can include wired, wireless, or fiber optic connections. Network 102 can include one or more wired and/or wireless networks that are capable of receiving and transmitting data, voice, and/or video signals, including multimedia signals that include voice, data, and video information. In general, network 102 can be any combination of connections and protocols that will support communications between computing device 104, server computer 108, and any other computing devices (not shown) within distributed data processing environment 100.

Database 112 is a repository for data used by VR-AR engine 110. In the depicted embodiment, VR-AR engine 110 resides on server computer 108. However, database 112 can reside anywhere within a distributed data processing environment provided that VR-AR engine 110 has access to database 112.

Data storage can be implemented with any type of data storage device capable of storing data and configuration files that can be accessed and utilized by server computer 108. Data storage devices can include, but are not limited to, database servers, hard disk drives, flash memory, and any combination thereof.

FIG. 2 is a schematic of a method of applying a virtual element from a virtual environment to a real world element.

VR-AR engine 110 identifies a virtual element from a virtual environment (step 202).

A virtual environment includes a combination of virtual elements and augmented reality elements. Augmented reality elements are derived from physical spaces in the real world. In preferred embodiments, the virtual environment comprises both virtual elements and augmented reality elements presented in the virtual environment. For example, a virtual environment can be a three-dimensional representation of the Earth, where augmented reality elements are distributed within the three-dimensional representation of the Earth. In a more specific example, the augmented reality elements can be tied to specific individuals, and contain representations of the individuals' real world environments by any means known in the art, including 360° cameras, conventional video cameras, and stitched photos from cameras.

Virtual elements can include anything rendered in the virtual environment. In one embodiment, the virtual element is a render of a physical object. For example, a sword rendered in the virtual environment of a video game can be a virtual element. In another example, a building in the virtual environment of a video game can be a virtual element.

In another embodiment, the virtual element is a render of a non-physical element. For example, the virtual element can be based on at least one of light, sound, color, and movement. In a specific example, the virtual element can be the color of the light that hits one or more virtual objects. In another specific example, the virtual element can be the manner in which an object moves, such as a vehicle moving through a virtual landscape.

In yet another embodiment, the virtual element can be a combination of a render of a physical object and a render of a non-physical object. For example, the virtual element can be a sun in the sky in the virtual environment and the accompanying directional lighting and colors associated with the light emitted from the sun in the virtual environment. In another example, the virtual element can be a render of a wolf and the accompanying sounds of the wolf howling during a full moon.

VR-AR engine 110 identifies a real world element (step 204).

It is contemplated that VR-AR engine 110 can identify real world objects in any manner known in the art. In a preferred embodiment, VR-AR engine 110 can use image recognition to identify a real world object. For example, VR-AR engine 110 can identify an oblong, brown object with the silhouette of a baseball bat and identify that the object is indeed a baseball bat.

In some embodiments, VR-AR engine 110 can identify real world objects with the assistance of a user. For example, a user can point a camera to a car and identify that object as a car. In other examples, the user can also input different properties associated with a real world object. Properties can include physical and non-physical properties. For example, a user can input that a foam object is flexible and made of a resilient material.

VR-AR engine 110 retrieves linking parameters for the real world object (step 206).

Linking parameters can include any rules that determine whether the real world object is linked to a virtual object.

In some embodiments, linking parameters focus on the physical properties of an object. For example, linking parameters can set forth a rule that any physical object that has a particular level of rigidity cannot be linked with a weapon in the virtual environment in order to avoid users swinging hard objects around like swords in an augmented reality interface.

In another example, linking parameters can determine that a foam pool noodle exhibiting elastic properties and low weight can be associated with a sword in a virtual environment because the pool noodle poses little to no risk of injury if swung around.

In another embodiment, linking parameters focus on non-physical properties of an object. For example, linking parameters can set forth a rule that an object must fall within a particular color range (e.g., maroon to pink) for a similarly colored object in the virtual environment in order to be linked.
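As a hedged sketch, the rigidity and weight safety rules described above might be encoded as simple predicates over object properties; the dataclass fields and threshold values are invented for illustration and do not come from the patent:

```python
from dataclasses import dataclass

@dataclass
class RealWorldObject:
    name: str
    rigidity: float   # 0.0 (soft foam) to 1.0 (rigid steel)
    weight_kg: float

@dataclass
class LinkingParameters:
    """Safety rules limiting which real objects may stand in for a virtual weapon."""
    max_rigidity: float = 0.3
    max_weight_kg: float = 1.0

    def allows(self, obj: RealWorldObject) -> bool:
        # A hard or heavy object poses an injury risk if swung like a sword.
        return obj.rigidity <= self.max_rigidity and obj.weight_kg <= self.max_weight_kg

params = LinkingParameters()
noodle = RealWorldObject("foam pool noodle", rigidity=0.1, weight_kg=0.2)
bat = RealWorldObject("baseball bat", rigidity=0.9, weight_kg=0.9)
```

Under these thresholds the elastic, lightweight pool noodle passes (`params.allows(noodle)` is True) while the baseball bat fails the rigidity rule, matching the two examples above.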

VR-AR engine 110 determines whether the real world object meets linking parameters (decision block 208).

In preferred embodiments, VR-AR engine 110 determines whether the real world object meets linking parameters based on desirable physical properties. It is contemplated that objects are not necessarily linked based on similar physical properties.

In some embodiments, the linking parameters limit linking of virtual objects to real world objects that do not share similar physical properties. For example, the linking parameters can set forth a rule that real guns or any object reminiscent of a real gun (e.g., a toy replica) cannot be linked to virtual weapons.

In another example, the linking parameters can set forth a rule that bodies of water cannot be linked to solid ground in the virtual world in order to prevent individuals from running into pools based on an augmented reality overlay of solid land where there is a body of water.

In other embodiments, the linking parameters cause VR-AR engine 110 to link virtual and physical objects based on similar physical properties. For example, the linking parameters can cause VR-AR engine 110 to link a family dog to a virtual animal companion based on similar sizing, coloration, and movement patterns between the real world and virtual object.

Responsive to determining that the real world object does not meet linking parameters, VR-AR engine 110 ends (“NO” branch, decision block 208).

Responsive to determining that the real world object meets the linking parameters (“YES” branch, decision block 208), VR-AR engine 110 renders the virtual element in the augmented reality interface (step 210).
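Steps 202 through 210 can be strung together as one sketch of the FIG. 2 flow; the linking-parameter check is passed in as a callable, and all names are illustrative rather than taken from the patent:

```python
from typing import Callable, Optional

def apply_virtual_element(virtual_element: str,
                          real_object: str,
                          meets_linking_parameters: Callable[[str], bool]) -> Optional[str]:
    """FIG. 2 flow sketch: evaluate the retrieved linking parameters
    (decision block 208) and either render the virtual element over the
    real-world object (step 210) or end on the "NO" branch."""
    if not meets_linking_parameters(real_object):
        return None  # "NO" branch: the engine ends without rendering
    return f"render {virtual_element} over {real_object}"
```

For instance, passing a rule that only accepts a foam pool noodle lets a virtual sword render over the noodle but never over a baseball bat.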

FIG. 3 is a schematic of a method of tracking changes to virtual elements and applying the changes in an augmented reality interface.

VR-AR engine 110 monitors the virtual element (step 302).

VR-AR engine 110 monitors any physical and/or non-physical changes to virtual objects. For example, VR-AR engine 110 can monitor a player's armor in-game and monitor if there are any upgrades to the armor in the game or any damage accumulated on the armor in the game. In another example, VR-AR engine 110 can monitor any color changes or new sound effects associated with a player's armor in the game.

In another example, VR-AR engine 110 can monitor a building in a video game for any changes to the building over the course of the game. In a more specific example, VR-AR engine 110 can identify an in-game building and monitor any additional structures added to the building over the course of the video game in which the building is located.

VR-AR engine 110 identifies a change in the virtual element in the virtual environment (step 304).

VR-AR engine 110 renders the change in the augmented reality interface (step 306).

VR-AR engine 110 renders the change in the augmented reality interface to overlay a virtual rendering of the virtual object over the real world object. For example, VR-AR engine 110 can render damage accumulated on body armor in a video game over a jacket of a user. In another example, VR-AR engine 110 can render a new building unlocked in a video game over a real world building of similar size. In yet another example, VR-AR engine 110 can render new special effects associated with an unlocked ability of a weapon in a video game to a corresponding real world object. In a more specific example, VR-AR engine 110 can add flames to an augmented reality render of a sword over a foam pool noodle if the player unlocks a fire-based skill for the weapon in-game.
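The monitor-detect-render loop of steps 302 through 306 resembles a change-detection callback; this sketch uses invented class and method names and treats the virtual element's state as an opaque value:

```python
class VirtualElementMonitor:
    """Re-render the AR overlay only when the monitored virtual element changes."""

    def __init__(self, render_callback):
        self._last_state = None
        self._render = render_callback  # e.g., updates the augmented reality interface

    def observe(self, state):
        """Steps 302-304: compare the newly observed state with the last one
        seen; step 306: render the change when they differ."""
        if state != self._last_state:
            self._last_state = state
            self._render(state)

# Only two renders fire for three observations, since the middle one is unchanged.
renders = []
monitor = VirtualElementMonitor(renders.append)
for state in ["plain armor", "plain armor", "armor with flames"]:
    monitor.observe(state)
```

After the loop, `renders` holds ["plain armor", "armor with flames"]: the unchanged second observation produced no redundant re-render.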

FIG. 4 depicts a block diagram of components of the server computer executing VR-AR engine 110 within the distributed data processing environment of FIG. 1. FIG. 4 is not limited to the depicted embodiment. Any modification known in the art can be made to the depicted embodiment.

In one embodiment, the computer includes processor(s) 404, cache 414, memory 406, persistent storage 408, communications unit 410, input/output (I/O) interface(s) 412, and communications fabric 402.

Communications fabric 402 provides a communication medium between cache 414, memory 406, persistent storage 408, communications unit 410, and I/O interface 412. Communications fabric 402 can include any means of moving data and/or control information between computer processors, system memory, peripheral devices, and any other hardware components.

Memory 406 and persistent storage 408 are computer readable storage media. As depicted, memory 406 can include any volatile or non-volatile computer storage media. For example, volatile memory can include dynamic random access memory and/or static random access memory. In another example, non-volatile memory can include hard disk drives, solid state drives, semiconductor storage devices, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, and any other storage medium that does not require a constant source of power to retain data.

In one embodiment, memory 406 and persistent storage 408 are random access memory and a hard drive hardwired to computing device 104, respectively. For example, computing device 104 can be a computer executing the program instructions of VR-AR engine 110 communicatively coupled to a solid state drive and DRAM.

In some embodiments, persistent storage 408 is removable. For example, persistent storage 408 can be a thumb drive or a card with embedded integrated circuits.

Communications unit 410 provides a medium for communicating with other data processing systems or devices, including data resources used by computing device 104. For example, communications unit 410 can comprise multiple network interface cards. In another example, communications unit 410 can comprise physical and/or wireless communication links.

It is contemplated that VR-AR engine 110, database 112, and any other programs can be downloaded to persistent storage 408 using communications unit 410.

In a preferred embodiment, communications unit 410 comprises a global positioning satellite (GPS) device, a cellular data network communications device, and a short- to intermediate-distance communications device (e.g., Bluetooth®, near-field communications, etc.). It is contemplated that communications unit 410 allows computing device 104 to communicate with other computing devices 104 associated with other users.

Display 418 is contemplated to provide a mechanism to display information from VR-AR engine 110 through computing device 104. In preferred embodiments, display 418 can have additional functionalities. For example, display 418 can be a pressure-based touch screen or a capacitive touch screen.

In yet other embodiments, display 418 can be any combination of sensory output devices, such as, for example, a speaker that communicates information to a user and/or a vibration/haptic feedback mechanism. For example, display 418 can be a combination of a touchscreen in the dashboard of a car, a voice command-based communication system, and a vibrating bracelet worn by a user to communicate information through a series of vibrations.

It is contemplated that display 418 does not need to be a physically hardwired component and can, instead, be a collection of different devices that cooperatively communicate information to a user.

FIG. 5 depicts a display with an augmented reality interface without virtual reality elements incorporated.

In the depicted embodiment, user 502 views a scene 504 through display 418. Scene 504 represents an unedited scene. For example, scene 504 can be from raw footage or an unedited virtual environment. Scene 504 includes woman 506, man 508, and chair 510.

FIG. 6A depicts a first augmented perspective 418A with an augmented reality interface incorporating a first set of user-specific virtual elements. For example, first user 602A can have attributes including preferences for content set in the Middle Ages, and AR-VR engine 110 can accordingly render woman 506 as a princess, man 508 as a king, and chair 510 as a throne.


FIG. 6B depicts a second augmented perspective 418B with an augmented reality interface incorporating a second set of user-specific virtual elements. For example, second user 602B can have attributes including preferences for animal shows, and AR-VR engine 110 can accordingly render woman 506 as a bear cub, man 508 as a parent bear, and chair 510 as a sawed-off tree stump.


FIG. 6C depicts a third augmented perspective 418C with an augmented reality interface incorporating a third set of user-specific virtual elements. For example, third user 602C can have attributes including preferences for low-resolution games, and AR-VR engine 110 can accordingly render woman 506 as a small polygonal man, man 508 as a large polygonal man, and chair 510 as a chair made of toy blocks.

Though FIGS. 6A-6C depict stationary users in armchairs, it is contemplated that any user can move in a three-dimensional augmented reality enhanced space and view the same scene. For example, each user can see through their respective augmented reality perspectives from three respective, different positions in the original scene. For example, first user 602A can be behind chair 510, second user 602B can be beside woman 506, and third user 602C can view from a top-down perspective.
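The per-viewer re-skinning of FIGS. 6A-6C can be sketched as a lookup from a user's preference theme to object substitutions; the theme names and tables are hypothetical, drawn only from the examples above:

```python
THEMES = {
    "middle_ages":  {"woman": "princess", "man": "king", "chair": "throne"},
    "animal_show":  {"woman": "bear cub", "man": "parent bear",
                     "chair": "sawed-off tree stump"},
    "low_res_game": {"woman": "small polygonal man", "man": "large polygonal man",
                     "chair": "chair made of toy blocks"},
}

def render_scene_for(theme: str, scene: list[str]) -> list[str]:
    """Re-skin each object in the shared scene for one viewer, leaving
    objects without a mapping unedited (as in the raw scene of FIG. 5)."""
    table = THEMES.get(theme, {})
    return [table.get(obj, obj) for obj in scene]
```

All three users view the same underlying scene ["woman", "man", "chair"], but each receives their own substitutions, so the Middle Ages viewer sees a princess, king, and throne while an unthemed viewer sees the unedited scene.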

It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the scope of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C . . . and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.

Claims

  1. A method of using one or more processors to provide an augmented reality interface for use by a first human user and a second human user, comprising: producing a visual transformation of the first human user, thereby creating a first transformed human user, and a transformed visual environment of a real-world environment around the first human user; producing a virtualized reality world that includes images of the first transformed human user moving about, and interacting with, the transformed visual environment; providing an electronic interface through which the second human user can interact, in real-time, with at least one of the first transformed human user and the transformed visual environment; identifying a real-world object using a sensor, where the sensor is physically separated from the real-world object; determining whether the real-world object meets one or more linking parameters based on at least one property associated with the real-world object, wherein the linking parameters include one or more restrictions associated with the rendering of one or more real-world objects in augmented reality, and wherein the one or more restrictions limit the linking of the real-world object with a virtual object based on a safety rule; and producing a transformed object disposed within the real-world environment, and allowing the second human user to use the interface to interact with the transformed object.
  2. The method of claim 1, further comprising disposing a virtual object within the transformed environment, and allowing at least one of the first human user and the second human user to use the interface to interact with the virtual object.
  3. The method of claim 2, wherein the virtual object represents a monetary value.
  4. The method of claim 2, wherein the virtual object represents an ability or power.
  5. The method of claim 2, wherein the virtual object represents a resource level.
  6. The method of claim 1, further comprising producing a visual transformation of the second human user, thereby creating a second transformed human user, and allowing the second transformed human user to use the interface and interact with at least one of the first transformed human user and the transformed visual environment.
  7. The method of claim 6, further comprising producing a visual transformation of a third human user, thereby creating a third transformed human user, and allowing the second transformed human user to use the interface to interact with the third transformed human user.
  8. The method of claim 1, wherein the step of producing a visual transformation of the first human user comprises the first human user controlling at least a portion of an appearance of the first transformed human user to one or more third parties.
  9. The method of claim 1, wherein the step of producing a visual transformation of the real-world environment comprises the first human user controlling at least a portion of an appearance of the transformed environment.
  10. The method of claim 1, wherein the step of producing a visual transformation of the first human user comprises the second human user controlling at least a portion of an appearance of the first human user.
  11. The method of claim 1, wherein the step of producing a visual transformation of the real-world environment comprises the second human user controlling at least a portion of an appearance of the transformed environment.
  12. The method of claim 1, further comprising storing at least a 30 second instance of the virtualized reality world, thereby creating a means for time-shifting the virtualized reality world, and allowing the second human user to use the interface to interact with a time-shifted virtualized reality world.
  13. The method of claim 1, further comprising producing an auditory transformation of the first human user, and allowing the second human user to interact in real-time with an auditorily transformed first human user.
  14. The method of claim 1, wherein appearance of the first transformed human user within the interface tracks facial expressions of the first human.
  15. The method of claim 1, wherein appearance of the first transformed human user within the interface tracks limb movements of the first human.
  16. The method of claim 1, wherein the first human user is a first participant and the second human user is a second participant in a multiplayer game.
