U.S. Pat. No. 10,672,171
SYSTEM AND METHOD FOR DYNAMIC CONTENT GENERATION IN A VIRTUAL SPACE
Assignee: LAMPLIGHT FOREST HOLDINGS PTY LTD
Issue Date: March 7, 2017
Illustrative Figure
Abstract
A computerized system and method provides for the dynamic generation of content in a virtual space. The method and system includes selecting at least one virtual object that provides content in the virtual space and defining interaction parameters for the virtual object, such as movement along an axis in the virtual space. The method and system includes instantiating an avatar, or selecting another object, in the virtual space that moves along the selected axis and pairing a location of the avatar, or the other object, to the interaction parameters for the virtual object. Therein, the method and system includes modifying an output generated by the at least one virtual object based on changes in position of the avatar, or other object, within the virtual space, the modifying of the output determined based on the interaction parameters.
Description
A better understanding of the disclosed technology will be obtained from the following detailed description of the preferred embodiments taken in conjunction with the drawings and the attached claims.
DETAILED DESCRIPTION
Embodiments of the disclosed technology provide for the dynamic generation of content in a virtual space. The content generation includes audio content and/or video content, whereby the content generation uses a relationship between an avatar and a defined object in the virtual space, or can be based on a relationship between multiple objects in the virtual space.
FIG. 1 illustrates a system 100 allowing for dynamic content generation in the virtual space 102 with real world 104 output results. The virtual space 102 includes an input/output interface 106 and control processor 108. The virtual space 102 additionally includes an object 110 with a library 112, as well as an avatar 114 also having a library 116. In the real world environment 104, an input/output interface 120 engages a processor 122, with user controller 124 inputs and audio output 126 and video output 128.
In the system 100, the virtual space is any suitable computer processing environment or gaming space as recognized by one skilled in the art. The system 100 includes additional processing elements not expressly illustrated, but, as recognized by those skilled in the art, these elements provide for the generation of the virtual computing space allowing for user engagement, control and interactivity. Such a space may be housed in one or more networked computer processing environments with user engagement across a networked connection. One such example of a virtual space may be an online interactive gaming world, as commonly available using current technology means.
The i/o device 106 may be a computer processing module allowing for communication into the virtual space 102. The controls 108 represent a processing module or computer processing device that manages user input control operations, and translates and engages the user controls into the virtual space 102. By way of example, the controls 108 allow for controlling an avatar 114.
The object 110 may be any suitable object within the virtual environment. As described in further detail below, the object 110 relates to the generation of content, such as audio or video content, or effects that modify audio or video content. It is further noted that while the space 102 of FIG. 1 includes a single object 110, any number of objects may be included and the single object 110 is for illustration purposes only.
Associated with the object 110 is a library 112. The library 112 may be one or more storage devices having the objects stored therein. For instance, if the object is an audio file, the library includes the audio content of the file. In another instance, if the object is a microphone input feed, the library includes the programming instructions for receipt of the microphone feed.
The virtual space 102 further includes the avatar 114, along with a library 116 including data relating to the avatar. As used herein, the avatar generally refers to a three-dimensional game-character humanoid subject to control functions, e.g. run, walk, jump, etc. It is recognized that the avatar is not expressly restricted to game-character humanoids, but can also represent any suitable object or element in the virtual space that the end user is granted the ability to control or otherwise manipulate. In general, the term avatar as used herein refers to any virtual object capable of being controlled by an end user.
Where the virtual space 102 operates within a computing environment, the real world 104 represents physical user interactive devices. These elements may be found within a gaming system, a computing system, a self-contained mobile processing device, or any other suitable system used for virtual space interactivity. The i/o device 120, similar to the i/o device 106, allows for interaction between the processor 122 and the virtual space 102, using known i/o interactivity technology.
The processor 122 may be any suitable processing device or devices providing for functionality as described herein. The processing device 122, in one embodiment, includes executable instructions stored in one or more storage devices (not expressly illustrated), whereby the processing device performs processing operations in response thereto.
The user controller 124 may be any suitable type of controller, including but not limited to a keyboard, mouse, microphone, touch screen, and/or control pad. The controller 124 receives user input, with such input processed and translated by the processor 122 for the virtual space operations 108.
The audio output 126 may be any suitable device operative to provide audio output. By way of example, the audio output may include a speaker, speaker systems, and/or headphones. The video output 128 may be any suitable display device operative to provide the video output generated from the virtual space 102.
In the processing environment 100 of FIG. 1, the method and system provides for dynamic generation of audio and video content via a user engaging the controller 124, with feedback via the output device(s) 126, 128, by operation of an avatar 114 within the virtual space 102. As described in further detail below, the dynamic content is generated by associating the avatar 114 with the object 110, where movement of the avatar adjusts the object using interaction parameters associated with the object.
In a further embodiment, the relationship for dynamic content generation is not restricted to the proximity relationship between an object and an avatar. The instantiation of multiple objects allows for dynamic content generation based on the relationship of objects. For example, where a first object is an audio source, the second object may be a modification of that audio source. The relationship can be defined between the two objects such that the first object output is modified based on the proximity of the second object. Where the object positions/proximity are fixed, the output is then set based on the static proximity value. But it is recognized that objects are moveable within the virtual space, such as in one embodiment having an avatar move the object, so the proximity between objects therefore changes and the output is dynamically modified.
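The object-to-object relationship described above can be sketched in code. The following Python snippet is illustrative only: the names (`Obj`, `modified_output`), the linear fall-off, and the boost-style modification are assumptions, not taken from the patent.

```python
import math

class Obj:
    """A positioned object in a 2-D virtual space (illustrative)."""
    def __init__(self, x, y):
        self.x, self.y = x, y

def proximity(a, b):
    """Euclidean distance between two objects."""
    return math.hypot(a.x - b.x, a.y - b.y)

def modified_output(source_level, modifier, source, max_range=10.0):
    """Scale the source output by how close the modifier object is.

    At distance 0 the modifier applies fully; beyond max_range it has
    no effect. Fixed positions yield a static output; moving either
    object changes the proximity and hence the output dynamically.
    """
    d = proximity(source, modifier)
    influence = max(0.0, 1.0 - d / max_range)  # 1.0 touching, 0.0 out of range
    return source_level * (1.0 + influence)    # assumed: modifier boosts source

source = Obj(0, 0)
modifier = Obj(5, 0)                            # half-way into range
print(modified_output(1.0, modifier, source))   # 1.5
```

Moving the modifier object (for instance, via an avatar carrying it) and re-evaluating `modified_output` produces the dynamic modification the paragraph describes.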
FIG. 2 illustrates a flowchart of the steps of one embodiment of a method for dynamic content generation. The flowchart, in one embodiment, may be performed within the operating system 100 of FIG. 1, including processing steps performed in the virtual space 102, as well as output signal(s) for the audio output 126 and/or video output 128.
In the embodiment of FIG. 2, a first step, step 140, is the selecting of a virtual object, the virtual object providing content in a virtual space. In one embodiment, the virtual object may be an audio content object or a video content object. The content object may include a source, e.g. an audio object may be an audio source. The content object may also include variations or modifications of the content, such as, for example, audio effects for an audio source.
Sample audio content objects within the virtual space may include, but are not limited to: a sample player playing one or more audio samples; a synthesizer providing audio content; and a microphone or other audio input line. Sample audio content objects relating to audio effects may include, but are not limited to: digital signal processing (DSP) effects, an audio bus and a recording module. The DSP effects may include any suitable effect adjusting the audio content, such as, by way of example, delay or reverb. The DSP effects include expressive parameters that are adjustable and made accessible to users for audio manipulation.
Sample video content objects include visual or video sources such as lighting (global or localized lighting), and motion and form sources such as shape emitters or form dynamic modules. Sample video content objects also include visual effects, such as, but not limited to, color assignment operations or camera filter operations, e.g. contrast, saturation, etc.
Audio/video objects may further include narrative sources providing content within the virtual space. The narrative sources include, for example, objects of verbs, inventory objects and text display. The text display objects may include, for example, synced input and display of text, conversation object (output and navigation), environmental and persistent text.
Audio/video objects may further include narrative inputs providing content within the virtual space. The objects may include verb-object interaction response authoring, conversation object authoring and navigation, synched input and display of text, and hyperlink navigation.
The objects may further include and/or relate to control parameters for operations in the virtual space. The control parameters may be further grouped based on player interaction parameters and automated controls. Player interactions may include control in the virtual space via the avatars, control via external hardware and control via internal hardware.
The classes of player interactions are further delineated relative to the virtual space operations, including for example objects relating to game-area navigation, camera view, cursor movement, avatar control/selection.
The above listing of objects represents sample classes and general classifications, and is not an exclusive or comprehensive list. Further objects, as recognized by one skilled in the art, are within the scope of the presently described virtual object, where the virtual object relates to interactivity in the virtual space.
In the flowchart of FIG. 2, a next step, step 142, is defining an interaction parameter for the virtual object, the interaction parameter being adjustable in response to movement along at least one axis in the virtual space. The interaction parameters are relative to the object itself. By way of example, if the object is a lighting effect, the interaction parameter may be adjusting the brightness of the lighting effect. In another example, if the object is an audio sample, the interaction parameter may be an adjustment of the audio sample, using the basic example of volume. Where the interaction parameter is tied to an axis, in the example of volume, movement up the axis may increase volume and movement down the axis may decrease volume.
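The axis-tied volume example can be expressed as a simple mapping. This sketch assumes a bounded axis range (the 0-100 bounds are illustrative, not from the patent) and a normalized 0.0-1.0 volume:

```python
def volume_from_axis(y, y_min=0.0, y_max=100.0):
    """Map avatar position along one axis to a 0.0-1.0 volume level:
    moving up the axis raises volume, moving down lowers it.
    The axis bounds are assumed for illustration."""
    y = min(max(y, y_min), y_max)       # clamp to the axis range
    return (y - y_min) / (y_max - y_min)

print(volume_from_axis(25.0))   # 0.25
print(volume_from_axis(150.0))  # 1.0 (clamped at the top of the axis)
```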
In the virtual space, step 144 is instantiating an avatar, the avatar operative for movement along the at least one axis in the virtual space. The instantiating of an avatar may be performed using any number of suitable techniques as known in the art. For example, one technique may be being within proximity of an avatar and selecting one or more designated buttons. In another example, one technique may be a designated avatar switching button allowing a user to jump from one avatar to another. In another example, an interface allows a user to actively select from one or more available avatars and thus place the avatars into the virtual space. Regardless of the specific selection technique, the selection operation itself ties the user input controls to the selected avatar, allowing the user to move the avatar in the virtual space.
Based on whether the virtual environment is a two-dimensional space or a three-dimensional space, the corresponding number of axes are available for movement. For example, in a three-dimensional space, movement is available along an x-axis, a y-axis and a z-axis.
In the flow diagram of FIG. 2, step 146 is pairing the location of the avatar to the interaction parameter(s) for the virtual object. The pairing operation ties the movement of the selected avatar to the corresponding object, where the movement thus allows for changing an output of the virtual object.
Step 148 is receiving user input commands for changing the position of the avatar within the virtual space. Step 148 may be performed using known game engine or virtual space engine technology for navigating the user in the virtual space.
Step 150 is modifying an output generated by the virtual object based on changes in position of the avatar, based on the interaction parameters. As noted in FIG. 1, where the user input via the user controls 124 operates in the virtual space 102, the output of the virtual space interaction is made available to the video output 128 and audio output 126. Thus, as the avatar is tied to the virtual object with interaction parameters defined by the avatar movement, the output generated by the virtual object is modified relative to the changes in the avatar positions. For example, if the virtual object is an audio clip and the interaction parameter is the frequency of the audio clip, movement along the selected axis by the avatar changes the frequency. Continuing in this example, if the axis is the x-axis, the frequency may be increased as the avatar moves up the x-axis and decreased as the avatar moves down the x-axis. In general, the movement along one or more axes is based on proximity between the avatar and the object (or multiple objects), so the movement along the one or more axes changes the proximity values and thus provides for modification of the output.
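The pairing of steps 146-150 can be sketched as a small update loop. Everything here is illustrative: the class names, the choice of an exponential semitone mapping for frequency, and the scale factor are assumptions layered on the patent's frequency example.

```python
class AudioClip:
    """Illustrative virtual object whose interaction parameter is frequency."""
    def __init__(self, base_freq=440.0):
        self.base_freq = base_freq
        self.freq = base_freq

class Avatar:
    """Illustrative avatar with a position on the x-axis."""
    def __init__(self):
        self.x = 0.0

def pair_and_update(avatar, clip, semitones_per_unit=1.0):
    """Step 150 sketch: moving up the x-axis raises the clip's frequency,
    moving down lowers it (an assumed exponential pitch mapping)."""
    clip.freq = clip.base_freq * 2 ** (avatar.x * semitones_per_unit / 12)

avatar, clip = Avatar(), AudioClip()
avatar.x = 12.0                 # user input moves the avatar up the x-axis
pair_and_update(avatar, clip)   # paired parameter updates with the position
print(round(clip.freq, 1))      # 880.0 (one octave above the 440 Hz base)
```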
In the methodology of FIG. 2, the final step is step 152, providing the modified output to an output device external to the virtual space. In the sample embodiment of FIG. 1, this may include the audio output 126 and/or video output 128. Using the above example of adjusting the frequency of an audio sample, the modified output is thus provided to the audio output 126, with the avatar movement itself visible via the video output 128.
Where the methodology of FIG. 2 provides for a single avatar and a single virtual object, the present method and system additionally operates in a larger virtual space having further functionality with any number of virtual objects and any number of avatars.
FIG. 3 illustrates an embodiment having a virtual space 102 with six exemplary audio and video objects and a plurality of avatars. The virtual space includes a first audio source 160, second audio source 162, third audio source 164 and fourth audio source 166. Further in this virtual space are additional objects complementary to the audio sources, here having examples of a pulse engine 168 and a note sequencer 170. For example, the pulse engine 168 is engaged to the audio source 162 such that the audio source 162 may be a pulse triggered audio sample. For example, the note sequencer 170 is engaged to the audio source 166, where audio source 166 may be a synthesizer or other music-generation source.
The virtual space 102 further includes a visual object 172, here a camera filter for adjusting one or more camera filtering values. The virtual space includes a first avatar 174 and second avatar 176.
In the user space 104, a first player 180 operates controls with virtual world outputs provided to the output device 182, as does a second player 184. In the system of FIG. 3, it is recognized that numerous processing elements are omitted for clarity purposes only, but one skilled in the art recognizes the operational functionality of player 180 engaging the virtual world via any suitable computing means. The output device 182 includes processing operations for receiving the output signals from the virtual space 102 and generating the output, including for example a video display or screen and speakers.
FIG. 3 illustrates the framework for dynamic virtual space setup with common object connections. As described in further detail below, the objects 160-172 in the virtual space 102 can be created, destroyed, moved and routed to other objects or players.
As illustrated, the connecting arrows between the avatars 174 and 176 and the objects 160-172 illustrate proximity relationships in the virtual space, wherein the player 180 and/or player 184 can manually create the connections, as well as control movements in the virtual space. The objects can be instantiated by the player 180 during gameplay. Moreover, the dynamic manner of the virtual space allows for the player 180 to change engagement of avatars, such as switching from avatar 174 to avatar 176, if the second player 184 is not actively engaged. Here, the avatars themselves are engaged with the objects, so changing avatars can change the object engagement.
Using the methodology of FIG. 2, in the exemplary embodiment of FIG. 3, as the player 180 operates the avatar 174 in the virtual space 102, the movement of the avatar 174 modifies the virtual objects 160, 162. In this embodiment, the audio and visual information received by the avatar 174 is combined at the audio visual output device 182.
Further variations and embodiments can operate within the general context of FIG. 3, including single and multiplayer engagement in the virtual space. Embodiments include local multiplayer as well as distance multiplayer embodiments.
Single player games or virtual environments include at least one avatar. The avatar movement is constrained in the virtual space by a basic physics simulation that determines gravity, smooth movement through the dimensional environment and object collisions.
Avatar control is an integral part of the virtual environment. A current active avatar (CAA) represents the current avatar controlled by the player, such as the avatar 174 controlled by player 180. In the virtual environment, players can create new avatars at any time, as well as destroy avatars. One setting provides that when a new avatar is created, that avatar becomes the CAA. Additionally, with multiple avatars, the player may manually select the change of the designated CAA.
In the example of multiplayer, FIG. 3 includes the second player 184, engaging avatar 176. In local multiplayer, players 180 and 184 share the output 182. In local multiplayer, controls operate similar to the single player embodiment, but further include rules precluding a player switch to control an avatar already being controlled by the other player. Similarly, one player may lock an avatar, so that when the avatar is not being controlled, other players are prevented from engaging the avatar.
Another embodiment is a remote multiplayer environment having multiple avatars. Using FIG. 3 again, the difference is that player 180 and player 184 are geographically separate and would have individual audio visual output elements. Therein, the players 180 and 184 do not share the output 182 of FIG. 3, but would each have their own output device.
Moreover, it is noted that the multiplayer environment is illustrated with two players, but the present method and system operates using any number of players. For example, in a large virtual space, there can be tens or even hundreds of players; therefore the present method and system provides general interactive guidelines applicable to varying scales of virtual environments and is not expressly limited to single or double player environments.
When an avatar is assigned or matched to an audio source, the audio source can be in either a locked or unlocked mode. In a locked-listener mode, the audio object may start in an off position as the default position. When the avatar is then locked to the source, the player may then engage and start the audio source. In another embodiment, a player may switch an audio source on, and the turning on may then activate the locking of the avatar and source. Either way, once the source and avatar are locked, the position of the avatar relative to the audio source determines one or more output parameters, such as, by way of example, volume and panning.
In one embodiment, the audio source may then be turned off, but the connection with the avatar remains. Various other embodiments provide for toggling between avatars and objects, as described in further detail below regarding multiplayer and multi-avatar environments. At any time, the player can manually replace the current listener, regardless of whether the audio source is playing or stopped. If playing, this does not interrupt audio playback, as the system smoothly interpolates from the previous to the new listener position and orientation for the output modifications.
In one embodiment, the listener and audio source do not need to be locked. In an unlocked-listener mode, the audio source can always be dynamically re-assigned to the current active avatar in the virtual space. Thus, if the player switches avatars, the connection from the audio source is updated automatically to the selected new current active avatar.
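The unlocked-listener re-assignment can be sketched as follows. The class names and the list of unlocked sources are illustrative assumptions; the point shown is that switching the current active avatar re-targets every unlocked source automatically.

```python
class Source:
    """Illustrative audio source whose listener can be re-assigned."""
    def __init__(self):
        self.listener = None

class Player:
    """Sketch of unlocked-listener mode: changing the current active
    avatar (CAA) automatically re-targets every unlocked audio source."""
    def __init__(self):
        self.caa = None
        self.unlocked_sources = []

    def switch_avatar(self, avatar):
        self.caa = avatar
        for source in self.unlocked_sources:
            source.listener = avatar   # dynamic re-assignment, no manual step

player = Player()
source = Source()
player.unlocked_sources.append(source)
player.switch_avatar("avatar_174")
print(source.listener)   # avatar_174
```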
Where there are multiple avatars and multiple sources, any single avatar can be the simultaneous listener of any number of audio sources. Using the exemplary FIG. 3, the virtual space 102 includes four audio sources 160, 162, 164 and 166 with avatars 174 and 176. The dashed lines from the avatars 174 and 176 to the output 182 represent audio signals.
With multiple audio sources, the player can hear a mix of the connected or locked audio sources. Thus, in the FIG. 3 embodiment, if the player 180 engages avatar 174, the output 182 includes audio sources 160 and 162, where the output of the sources 160 and 162 is adjusted based on the position of the avatar relative to the source. Similarly, if the player switches to the second avatar 176, the output 182 then switches to provide audio outputs 164 and 166. The above presumes that the audio sources 160, 162, 164 and/or 166 are turned on and active, as it is understood that if an audio source is turned off or otherwise inactive, no output is generated.
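The mixing described above can be sketched as a sum over active connected sources, each attenuated by proximity. The linear fall-off and the tuple layout are assumptions for illustration.

```python
import math

def attenuation(listener_pos, source_pos, radius):
    """Linear fall-off inside the proximity field; silent outside it."""
    d = math.dist(listener_pos, source_pos)
    return max(0.0, 1.0 - d / radius)

def mix(listener_pos, sources):
    """Mix heard by the listener: sum of every active connected source,
    each scaled by proximity. `sources` holds (position, radius, level,
    active) tuples - an assumed representation, not from the patent."""
    return sum(level * attenuation(listener_pos, pos, radius)
               for pos, radius, level, active in sources if active)

sources = [((0, 0), 10.0, 1.0, True),    # e.g. source 160: listener at centre
           ((5, 0), 10.0, 1.0, True),    # e.g. source 162: half attenuated
           ((0, 0), 10.0, 1.0, False)]   # inactive source: contributes nothing
print(mix((0, 0), sources))              # 1.5
```

Switching avatars corresponds to evaluating `mix` at the new avatar's position against a different set of connected sources.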
The virtual objects additionally include non-audio sources. FIG. 3 illustrates one example of a video object, a camera filter 172 providing video output adjustment for the second avatar 176 (as it is locked to avatar 176). FIG. 4 illustrates a sample of available objects. These exemplary objects affect audio and/or video output based on the assigned interaction parameters relative to the proximity or axis-defined distance between the avatar and the object.
FIG. 4 does not provide an exhaustive or exclusive list, but rather a representative listing of available objects, where further objects may be objects as recognized by one skilled in the art. The proximity of an avatar to a connected audio bus 190 controls the volume of the audio, as well as panning via the avatar's rotation. The proximity of an avatar to a connected camera filter 192 controls the amount of filter applied, e.g. wet/dry, to a scene's main camera, or can control specific parameters of individual camera filters. The proximity of an avatar to a connected global lighting object 194 controls the amount of light-settings applied to an associated object. Since any scene will have its own default light-setting, this connection performs a continuous linear interpolation between the object's setting and the default light settings of the scene. The proximity of an avatar or object to a localized light object 196 controls the intensity and/or radius of a light source.
A shape emitter 198 has no default state, but can be used to control a wide variety of object emission parameters based on proximity. The proximity of an avatar or object to a color assign object 200 controls the amount of color to apply to any object it is coloring. This connection performs continuous linear interpolation between the object's output color and the default color of any connected objects. Dials and faders 202 adjust the output value of the dial/fader based on avatar or object proximity. This applies to standalone dials/faders as well as to those that are attached to most kinds of dynamic objects. Thus, the value generated by the dial/fader influences any associated parameters.
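The continuous linear interpolation used by the global lighting and color assign objects can be sketched directly. The function names and the per-channel RGB blending are illustrative assumptions:

```python
def lerp(a, b, t):
    """Continuous linear interpolation between a default value `a`
    and the object's own setting `b`, by amount t in [0, 1]."""
    return a + (b - a) * t

def color_assign(default_rgb, object_rgb, proximity_amount):
    """Blend per-channel, the blend amount driven by proximity
    (an assumed RGB-tuple representation of the colors)."""
    return tuple(lerp(d, o, proximity_amount)
                 for d, o in zip(default_rgb, object_rgb))

# Half-way into the proximity field -> a 50/50 blend of the colors.
print(color_assign((0, 0, 0), (255, 128, 0), 0.5))   # (127.5, 64.0, 0.0)
```

The same `lerp` applies to the global lighting object, interpolating between the scene's default light settings and the object's settings.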
The proximity of an avatar to a note sequencer object 204 determines the note sequencer's volume, where even though the note sequencer does not directly output audio, the user can control the master volume level of the notes that it sends to an audio source. The proximity of an avatar to sequence nodes 206 controls the volume of any sound triggered in a sequence based on the listener distance to a first node in the sequence. The proximity of an avatar to switches 208 controls a binary on or off determination based on proximity of the avatar. Form dynamics 210 have no default state, but the proximity to an avatar can be used to control the speed and/or range of various motions: rotation, scaling, position-oscillation and orbit, by way of example.
As noted above, the position of the avatar relative to the object defines output modification. Proximity allows for position determination relative to the object. As noted, proximity can be from an avatar, but further embodiments also provide for dynamic output generation controlled by proximity between multiple objects.
FIGS. 5A-5C provide illustrated examples of a proximity field 220 between the avatar 174 and an audio source 160. In this example the proximity field is circular, and the radius of the field determines the effective range and intensity of the parameters of the object. The proximity fields are centered on the object itself, such that if and when an object is moved within the virtual space, the proximity field also moves.
Stated in different terms, the proximity determines a value applied to modification of the object. In the example of an audio bus associated with an audio source, the proximity determines the volume level of the output to the avatar. For example, using the audio source 160, the proximity field 220 around the audio source 160 determines the radius within which the sound is audible. The sound is loudest when the avatar is at the center.
In FIG. 5A, the avatar 174 is about half way between the center and the outer edge of the field 220, thus the volume of the audio source 160 should be about half volume. By contrast, in FIG. 5B, the avatar 174 is outside the proximity field 220 and thus the audio source 160 is inaudible to the avatar 174. As part of player control, the player may modify the field size. FIG. 5C illustrates the same proximity of avatar 174 to audio source 160 as FIG. 5B, but with an increase in the size of the field 220. Thus, the audio source 160 is now audible to the avatar 174.
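The three panels of FIGS. 5A-5C can be reproduced with one small function. The linear fall-off toward the field edge is an assumption (the patent states only that sound is loudest at the center and inaudible outside the field):

```python
import math

def field_volume(avatar_pos, source_pos, radius):
    """Volume within a circular proximity field centred on the source:
    loudest (1.0) at the centre, fading (here, linearly - an assumption)
    to silence at the edge, and 0.0 outside the field."""
    d = math.dist(avatar_pos, source_pos)
    return max(0.0, 1.0 - d / radius)

print(field_volume((5, 0), (0, 0), 10.0))   # 0.5  (FIG. 5A: half way out)
print(field_volume((12, 0), (0, 0), 10.0))  # 0.0  (FIG. 5B: outside field)
print(field_volume((12, 0), (0, 0), 20.0))  # 0.4  (FIG. 5C: enlarged field)
```

The third call shows the player's field-size control: the same avatar position becomes audible once the radius grows.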
While illustrated in FIGS. 5A-5C with an audio source, the object may instead be a video source. In one example, the object may be a localized light object, where the light is at full intensity when the avatar is next to the object and the light is not visible, e.g. intensity set to zero, when the avatar is outside the proximity field. In the above example of switches, the proximity field does not determine a range; rather, if the avatar is within the proximity field, the switch can be on, and once outside the field the switch is off (or vice versa).
Where many objects use the proximity field and radius as a means to adjust the intensity of one or more output parameters, one exception is digital signal processing (DSP) effects. With DSP effects, the audio source itself acts as the listener, unlike other objects where the avatar is the listener. The radius of a proximity field determines the effective range and intensity of the effect for any connected audio source within its radius. Similar to the FIG. 5 proximity field, the effect of the DSP effect is greatest when the audio source is at the center of the proximity field, and inaudible when outside the field.
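The DSP-effect exception can be sketched as a wet/dry amount driven by the *source's* position inside the effect's field. The crossfade formulation and all names here are illustrative assumptions:

```python
import math

def dsp_wet_level(effect_pos, source_pos, radius):
    """For a DSP effect the audio source is the listener: the wet amount
    is greatest at the field centre and zero outside it (linear fall-off
    assumed)."""
    d = math.dist(effect_pos, source_pos)
    return max(0.0, 1.0 - d / radius)

def apply_effect(dry_signal, wet_signal, wet):
    """Hypothetical wet/dry crossfade between unprocessed and processed
    samples."""
    return [(1 - wet) * d + wet * w for d, w in zip(dry_signal, wet_signal)]

wet = dsp_wet_level((0, 0), (3, 0), 6.0)           # source half-way into field
print(wet)                                          # 0.5
print(apply_effect([1.0, 0.0], [0.0, 1.0], wet))   # [0.5, 0.5]
```

Moving the audio source through the field (as in FIG. 6) re-evaluates `dsp_wet_level`, dynamically changing the effect intensity.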
FIG. 6 illustrates a system having multiple DSP effects, a first DSP effect 230 and a second DSP effect 232. A first audio source 234 includes a first effect slot 236, and a second audio source 238 includes a first effect slot 240 and second effect slot 242. Additionally, each DSP effect includes proximity fields 244 and 246, respectively.
As shown in FIG. 6, DSP effects can be connected to multiple audio sources at once. Each audio source can have a different ordering of effects, and players can re-order these effects at any time. In FIG. 6, the first audio source 234 has the first DSP effect 230 in its chain, and the second audio source 238 has the first DSP effect 230 in its chain, processed before the second DSP effect 232. Therefore, as the audio sources 234, 238 are moved within the virtual space, the proximity to the DSP effects 230 and 232 changes, changing the effectiveness and intensity of the effect for the audio source. It is further noted that, for output generation to the player (not illustrated in FIG. 6), the relationship of the audio source to the avatar further defines the audio output.
Dials and faders are another example of audio or video effects that are not solely determined by avatar to object proximity. Dials and faders can be assigned to measure their proximity to any scenery object, as well as to an avatar proximity.
FIGS. 7A-7B illustrate an example of a dial 250 set in proximity to the virtual world scenery object of a wall 254. In this case, the wall 254 is an inanimate object and thus not subject to movement. FIG. 7A illustrates a sample starting position with a given proximity, with FIG. 7B illustrating movement of the dial 250 away from the wall 254. In virtual gameplay, the player may control the dial 250 either directly, via an avatar, or by other means. As the dial 250 is moved away from the wall 254 (FIG. 7B), the dial's value is dynamically increased in proportion to the distance away from the wall 254. Relative to the output received by the player, in the virtual space the dial is connected to an audio source and the effect of the dial therein produces a change in the output of the audio source.
FIGS. 8A-8B illustrate a similar relationship, but with a fader 256 being adjusted based on proximity to the wall 254. As the fader is moved back from its position in FIG. 8A to its position in FIG. 8B, the value of the fader 256 is dynamically increased in proportion to its distance from the object. Similar to the dial 250 of FIGS. 7A-7B, the fader 256 may be controlled by the player operating in the virtual space.
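The dial-to-wall and fader-to-wall relationships of FIGS. 7A-8B reduce to a distance-proportional control value. The sketch below is a hedged illustration; the proportionality constant `scale` and the clamp to a maximum value are assumptions, not taken from the patent.

```python
import math

def control_value(control_pos, scenery_pos, scale=0.1, max_value=1.0):
    """Value of a dial or fader grows in proportion to its distance
    from the paired scenery object (e.g. the wall 254)."""
    distance = math.dist(control_pos, scenery_pos)
    return min(max_value, distance * scale)

# Dial near the wall, then moved away: the value rises with distance.
near = control_value((1.0, 0.0), (0.0, 0.0))
far = control_value((5.0, 0.0), (0.0, 0.0))
```

Connecting this value to an audio source's parameter (volume, filter, etc.) would then change the output the player hears as the control is moved.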
In addition to the defined proximity of an object to an avatar for output generation, the present method and system further facilitates the movement of objects within the virtual space. For example, an avatar is able to pick up and carry objects in the virtual space. FIG. 9 illustrates an example of the avatar 174 picking up (260) the DSP effect 262. The DSP effect 262 is coupled to the audio source 160 such that the DSP effect 262 modifies the audio source 160 output based on the designated DSP effect, the intensity of the modification based on the proximity of the DSP effect 262 to the audio source 160.
In the FIG. 9 illustration, the avatar 174 holding the DSP effect 262 moves in the virtual space, the movement of the avatar 174 illustrated by the dashed arrows. As the avatar 174 moves closer to the audio source 160, the effect level on the audio source increases.
In the exemplary movement, the avatar 174 criss-crosses the virtual space, illustrated by points 1-5. The DSP effect 262 is picked up at point 1 and moved through the space to points 2, 3 and 4. In this exemplary embodiment, the avatar 174 drops the DSP effect at point 4, continuing its movement to point 5.
In this embodiment, the second avatar 176 is an active listener, hearing the audio source 160 modified by the DSP effect 262. The DSP effect modification increases in intensity with the avatar movements through points 1, 2, 3 and 4, remaining constant when the avatar moves from point 4 to point 5.
Varying embodiments can include, for example, a second player controlling the audio source 160, so that movement of the audio source 160 further affects the modification by the DSP effect 262. Here, the DSP effect 262 is a virtual object that can be picked up and moved in the virtual space. Similarly, the audio source is also a virtual object capable of being moved in the virtual space. As the dynamic content generation is based on the proximity between objects and/or proximity with avatars, the generated output is therein modified based on the movement in the virtual space.
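One way to model the movable DSP effect of FIG. 9 — the effect-to-source proximity setting the modification intensity, and the listener-to-source proximity setting the heard level — is sketched below. The linear `falloff` and the specific radius are illustrative assumptions of this sketch.

```python
import math

def falloff(distance, radius):
    """Assumed linear falloff: full strength at zero distance, none at the radius."""
    return max(0.0, 1.0 - distance / radius)

def listener_output(source_pos, effect_pos, listener_pos, radius=10.0):
    """Output heard by a listening avatar: the effect's wet level is set by the
    effect-to-source proximity, the overall level by the listener-to-source
    proximity."""
    wet = falloff(math.dist(effect_pos, source_pos), radius)
    level = falloff(math.dist(listener_pos, source_pos), radius)
    return {"wet": wet, "level": level}

# Carrying the effect toward the source (points 1..4 of FIG. 9) raises the
# modification intensity heard by the second, listening avatar.
path = [(9.0, 0.0), (6.0, 0.0), (3.0, 0.0), (1.0, 0.0)]
wets = [listener_output((0.0, 0.0), p, (2.0, 0.0))["wet"] for p in path]
```

Dropping the effect at point 4 would freeze `wet` at its last value, matching the constant modification described for the movement from point 4 to point 5.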
In addition to avatar-based movement of objects, the virtual space further applies defined physical constraints on motion. Physics-based parameter controls rely on a special relationship between two objects, an anchor and a motion. The anchor and motion is a single object type consisting of two interdependent parts, the object type being a physics-based parameter.
FIGS. 10A-10C illustrate an example of object movement. The virtual space includes a motion 280 and an anchor 282. The motion 280 is further noted by faders representing axis-specific movement, here faders 284, 286 and 288 for the x, y and z axes, respectively.
The anchor 282 is set in place at the position where the player creates it. The anchor 282 may additionally be moved like other objects, including being picked up by an avatar and carried around. The player may use the anchor 282 to freeze the motion 280 object at any time, or to magnetize it, i.e. draw the motion 280 towards the anchor 282. The motion 280 component has a full gravity simulation. The player can dynamically control its physical parameters, such as bounciness and angular and linear drag. The player can dynamically apply forces, such as via an avatar or cursor, to propel the object through the 3D virtual space, e.g. throwing, rolling or dropping it.
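A toy integration step for the motion object's gravity, drag, floor bounce and optional magnetization toward the anchor might look like the following. All constants (time step, drag coefficient, bounciness) are illustrative assumptions; the patent describes these behaviors, not the numerics.

```python
def step(pos, vel, anchor=None, dt=0.1, gravity=-9.8,
         bounciness=0.8, drag=0.02, magnet=0.0):
    """One 2D integration step: gravity, linear drag, a bounce off the
    floor at y=0, and an optional pull toward the anchor (magnetize)."""
    x, y = pos
    vx, vy = vel
    vy += gravity * dt
    if anchor is not None and magnet > 0.0:
        ax, ay = anchor
        vx += (ax - x) * magnet * dt  # attraction toward the anchor
        vy += (ay - y) * magnet * dt
    vx *= (1.0 - drag)                # linear drag
    vy *= (1.0 - drag)
    x += vx * dt
    y += vy * dt
    if y < 0.0:                       # bounce off the floor
        y = -y * bounciness
        vy = -vy * bounciness
    return (x, y), (vx, vy)

# Drop the motion object from height 1 with sideways velocity and let it bounce.
pos, vel = (0.0, 1.0), (1.0, 0.0)
trajectory = [pos]
for _ in range(50):
    pos, vel = step(pos, vel)
    trajectory.append(pos)
```

Freezing the motion would simply stop the stepping; magnetizing corresponds to a nonzero `magnet` value with the anchor position supplied.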
The illustration of FIGS. 10A-10C provides for motion 280 within the virtual space relative to the anchor 282. In that motion, the faders 284, 286 and 288 automatically adjust. Thus in FIG. 10A, a force is applied to the motion 280, such as by an avatar or a cursor, causing the motion to drop at an angle.
FIG. 10B illustrates the motion 280 bouncing off the floor of the virtual space and climbing in height. As visible in the faders 284, 286 and 288, the x-axis fader has increased as the motion 280 moves away from the anchor 282, the y-axis fader changes only slightly because the motion 280 is in about the same position as its starting position of FIG. 10A, and the z-axis fader does not appear to change. In this example, it is noted that the image is essentially a two-dimensional representation, so the z-axis does not change, but one skilled in the art recognizes that in a three-dimensional virtual space, the z-axis may be similarly affected by the motion 280.
FIG. 10C further shows the motion 280 bouncing off the terrain and gravity pushing the motion 280 into the lower area. Relative to the anchor 282, the faders further adjust, the x-axis fader 284 further increasing, the y-axis fader 286 increasing and the z-axis fader 288 remaining unchanged.
Each of these fader parameters 284, 286 and 288 can be assigned to other output parameters. For example, the x-axis fader 284 can be assigned to the volume of an audio source, the y-axis fader 286 can be assigned to control the intensity of a light source, and the z-axis fader 288 can be assigned to a filter frequency of a DSP effect. Thus, the example of the motion 280 and anchor 282 dynamically modifies output effects using the physics of motion applied in the virtual space.
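The mapping from per-axis faders to output parameters in this example can be sketched as below; the normalization range and the 200 Hz to 8 kHz filter span are assumed values for illustration.

```python
def axis_faders(motion_pos, anchor_pos, range_=20.0):
    """Normalized per-axis displacement of the motion object from its
    anchor, expressed as three fader values in [0, 1]."""
    return tuple(min(1.0, abs(m - a) / range_)
                 for m, a in zip(motion_pos, anchor_pos))

def map_outputs(faders):
    """Assign each axis fader to an output parameter, as in the example:
    x -> audio volume, y -> light intensity, z -> filter frequency (Hz)."""
    x, y, z = faders
    return {
        "volume": x,
        "light_intensity": y,
        "filter_hz": 200.0 + z * (8000.0 - 200.0),
    }

# Motion displaced 10 units on x and 5 on y from the anchor at the origin.
outs = map_outputs(axis_faders((10.0, 5.0, 0.0), (0.0, 0.0, 0.0)))
```

As the physics simulation moves the motion object, re-evaluating these mappings each frame would yield the dynamically modified output effects described.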
In the virtual space, the processing operations may include any suitable means for connecting an avatar to an object or allowing for object to object connection (e.g. connecting a DSP effect to an audio source). One such technique may be avatar selection of the object. Another technique may be for the user to use a cursor and draw a line to connect objects. Similar to connection, disconnection may be by any suitable means. In one embodiment, when an object is connected, the processing operations automatically generate a disconnect button allowing user selection.
While the above examples include audio objects, the present method and system also applies to video objects. For example, the virtual space may include an in-game main camera. The camera may include filter slots allowing for camera filter operations. The avatar may select a camera filter, which is then applied to the main camera. Based on the proximity of the avatar to the filter, the in-game camera output is then modified accordingly.
By using the various objects, associating the objects with other objects or avatars, and using the proximity of the avatar to a specific object, the system dynamically generates output modified by the objects.
Further interactions with multiple objects provide for further varying of user-generated content. The above embodiments describe a general architecture, but further refinements and couplings of objects and interaction parameters can provide for a limitless number of variations of content generation.
For example, one embodiment may include controlling parameters with other parameters, such as controlling one or a group of faders with a single fader. In this example, if there are multiple faders, each fader tied to a different audio or video source, a player can control multiple faders with a single fader control. Another example may be the inclusion of a scaling factor between controlled faders, such as where a dial is used to set the scaling factor. Using the example of a 1.5 times scaling factor, if a first fader is adjusted, the linked fader is then automatically adjusted by a factor of 1.5 times.
A sequence of multiple faders may be paired, scaled, mirrored and otherwise interconnected for branching and scaling interactions. For example, a second, third and fourth fader are mapped to a first fader. The fourth fader scales up the incoming value from the first fader. Fifth and sixth faders are mapped to the fourth fader, taking on the fourth fader's value after the scaling of the first fader's value occurs. In this example, the player moves the first fader and the second through sixth faders then automatically adjust.
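The branching fader mappings described above suggest a simple propagation graph. The sketch below follows the worked example (faders 2-4 mapped to fader 1, fader 4 scaling by 1.5, faders 5-6 mapped to fader 4); the `Fader` class itself is an assumption of this sketch, not part of the patented system.

```python
class Fader:
    def __init__(self, name, scale=1.0):
        self.name = name
        self.scale = scale   # scaling applied to the incoming value
        self.value = 0.0
        self.targets = []    # faders driven by this one

    def map_to(self, other):
        self.targets.append(other)

    def set(self, value):
        self.value = value
        for t in self.targets:       # propagate down the branch
            t.set(value * t.scale)

# Build the example: f1..f6, with f4 scaling its incoming value by 1.5.
f = [Fader(f"f{i}") for i in range(1, 7)]
f[3].scale = 1.5                     # f[3] is the fourth fader
for target in (f[1], f[2], f[3]):    # faders 2-4 follow fader 1
    f[0].map_to(target)
for target in (f[4], f[5]):          # faders 5-6 follow fader 4
    f[3].map_to(target)

f[0].set(0.4)                        # the player moves only the first fader
```

Moving the first fader thus automatically adjusts the second through sixth, with the fourth through sixth carrying the 1.5x-scaled value.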
Another variation is node-based pattern sequencing. Node-based sequencing allows players to create linear or branching pulse-controlled sequences of nodes. In one embodiment, the player can create a node anywhere in the virtual world. A minimum of two nodes is needed to make a sequence. A pulse object drives the sequence, such as by being connected to a start node of the node series. For example, the pulse object can be toggled on/off, with its tempo determining the tempo of the sequence. As the pulse object uses default parameters, the player could, for example, use the avatar proximity to dynamically adjust the tempo.
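A minimal model of pulse-driven node sequencing, with the tempo set by avatar proximity, might be sketched as follows. The class, the wrap-around stepping, and the BPM range are assumptions of this sketch for illustration.

```python
class NodeSequence:
    """A linear pulse-driven sequence: each incoming pulse advances
    to the next node and fires it."""
    def __init__(self, nodes):
        if len(nodes) < 2:
            raise ValueError("a sequence needs at least two nodes")
        self.nodes = list(nodes)
        self.index = -1
        self.fired = []              # record of fired nodes, for inspection

    def pulse(self):
        self.index = (self.index + 1) % len(self.nodes)
        self.fired.append(self.nodes[self.index])

def tempo_from_proximity(distance, min_bpm=60.0, max_bpm=180.0, radius=10.0):
    """Avatar proximity to the pulse object sets the tempo: closer is faster."""
    closeness = max(0.0, 1.0 - distance / radius)
    return min_bpm + closeness * (max_bpm - min_bpm)

seq = NodeSequence(["A", "B", "C"])
for _ in range(4):                   # four pulses: A, B, C, then back to A
    seq.pulse()
```

A branching sequence would replace the single node list with per-node successor links, but the pulse-advance principle is the same.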
A further embodiment provides for area control points within the virtual space. Multiple discrete play areas can be defined in the virtual space, where the play areas are visually separate from each other. In the virtual space, the user cannot see from one area to another, but audio can carry between the spaces. For example, while in the virtual space, the user hears the aggregate of what all the avatars are hearing at any given moment.
Within these defined areas, each area can have a control-point, to which an avatar can be attached using the proximity relationship described herein. Therefore, the avatar's proximity to an area's control-point will affect the levels of all compatible objects within the area. For example, compatible object parameters may include volume, panning, wet/dry levels of audio effects, wet/dry levels of camera effects, intensity of lighting, and others, controlled by the proximity relationship.
In one embodiment, any modifications are applied area-wide, in addition to any local modifications the various objects may be undergoing. For example, if an audio source in an area has a volume dial at halfway, then the area control-point will modify that already-reduced volume based on changes in proximity. If another audio source is at full volume, then the area control-point will apply the same modification factor based on the avatar's proximity to the control-point. Stated in other terms, the relative differences between the volumes, or other output modifications, of the affected objects within the area will remain the same, as a global (area-wide) modification is applied to all of the objects.
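The area-wide modification can be expressed as a single multiplier applied on top of each object's local level, which is why the relative differences between objects are preserved. A minimal sketch, assuming a linear proximity factor:

```python
def area_mix(local_levels, avatar_distance, radius=10.0):
    """Area-wide modification: a control-point factor, set by avatar
    proximity, multiplies every object's already-applied local level,
    preserving the relative differences between objects."""
    factor = max(0.0, 1.0 - avatar_distance / radius)
    return {name: level * factor for name, level in local_levels.items()}

# One source at half volume (its dial at halfway), one at full volume.
mixed = area_mix({"drums": 0.5, "pad": 1.0}, avatar_distance=5.0)
```

An aggregate area would simply hold one such factor per duplicated control-point, applying each factor to its own area's objects for meta-mixing.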
In a further embodiment, specially designated aggregate areas can contain duplicates of the control-points for all other given areas in a discrete space. Thus, in the aggregate area, one or more avatars can be in a proximity relationship to the multiple control-points contained within, providing meta-mixing of the output levels of all other areas.
A further embodiment of the method and system provides for automated cyclical motions providing for dynamic content generation. For example, one embodiment may include a spinning platform within the virtual space. In the virtual space, the user may adjust the spin-speed and the physical radius of the platform. Any kind of object can be placed on the platform, rotating with the platform. The rotation of the object on the platform changes the object's position in relation to other objects, including for example its rotation about the Y-axis.
For example, in one embodiment, if an avatar stands on an outer edge of the platform, listening to an audio source not on the platform, the spinning platform changes the proximity relationship. This change in proximity changes the volume of the audio source as the avatar travels away from the object, and again as the avatar circles around and travels toward the object. In this embodiment, the panning of the audio source is also adjusted as the avatar's relative rotation changes as it spins.
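The spinning-platform example can be sketched as circular motion of the listening avatar, with volume and panning derived from the changing geometry. The platform geometry, the falloff distance, and the pan formula below are all assumptions of this sketch.

```python
import math

def riding_avatar_output(t, spin_speed, platform_radius, source_pos):
    """Volume and pan heard by an avatar riding the edge of a spinning
    platform centered at the origin, listening to an off-platform source."""
    angle = spin_speed * t
    avatar = (platform_radius * math.cos(angle),
              platform_radius * math.sin(angle))
    distance = math.dist(avatar, source_pos)
    volume = max(0.0, 1.0 - distance / 20.0)   # assumed linear falloff
    # Pan from the source's bearing relative to the avatar's rotated facing.
    bearing = math.atan2(source_pos[1] - avatar[1], source_pos[0] - avatar[0])
    pan = math.sin(bearing - angle)            # -1 = hard left, +1 = hard right
    return volume, pan

# Nearest point of the rotation vs. the far side, half a turn later.
near_vol, _ = riding_avatar_output(0.0, 1.0, 3.0, (10.0, 0.0))
far_vol, _ = riding_avatar_output(math.pi, 1.0, 3.0, (10.0, 0.0))
```

An oscillating platform or an orbiting object would swap the circular path for a different parametric motion, with the same distance and bearing computations driving the output.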
A platform can also have other movements. For example, another embodiment may include a position-oscillating platform, which relates to other objects but can continually oscillate its position up/down or side-to-side at a user-controlled speed. Another embodiment may include an orbit, where an object can be directed to orbit around another object at a user-controlled speed and relative distance. The parameters can be mapped to the orbiting object's dynamic position, thereby effecting dynamic content generation based on the proximity changes from the orbital movement.
A further embodiment is anchoring the motion of one object to another. Any object type can be connected directly to a motion object and made to follow its absolute position in the virtual space. For example, movements of avatars (and objects) as illustrated in FIGS. 9 and 10 can include anchoring. For example, movement of the object 262 in FIG. 9 could include having another object anchored thereto, such that the virtual space movement of the object 262 is also mirrored by the attached object. This can also be seen in the movement of FIGS. 10A-10C with the motion of the object 280.
FIGS. 11-17 illustrate sample screen shots of the dynamic content generation in the virtual world. For example, FIG. 11 shows a single avatar with a single content source and multiple toggle buttons and dials. Also visible in the screenshot is the proximity of the avatar to the object. FIG. 12 shows a multiple-avatar environment, showing the co-existence of multiple avatars in the virtual space. FIG. 13 shows an audio DSP effect in a proximity relationship with a sound source. This particular DSP effect has three available parameters, here embodied as faders that the user can control. FIG. 14 shows a pulse connected to a sound source with dynamic playhead positioning along the sound source's waveform. In this instance, each tempo-pulse would trigger a discrete envelope of the waveform starting at the user-defined playhead position.
FIG. 15 illustrates a virtual space with multiple avatars in a proximity relationship with multiple camera filters. FIG. 16 illustrates the avatar proximity to a fader. FIG. 17 illustrates a screenshot of a pulse node sequence as described herein.
Herein, the method and system generates dynamic content by virtual objects providing content output, the virtual objects modified by proximity of an avatar in the virtual space.
FIGS. 1 through 17 are conceptual illustrations allowing for an explanation of the present invention. Notably, the figures and examples above are not meant to limit the scope of the present invention to a single embodiment, as other embodiments are possible by way of interchange of some or all of the described or illustrated elements. Moreover, where certain elements of the present invention can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present invention are described, and detailed descriptions of other portions of such known components are omitted so as not to obscure the invention. In the present specification, an embodiment showing a singular component should not necessarily be limited to other embodiments including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, Applicant does not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, the present invention encompasses present and future known equivalents to the known components referred to herein by way of illustration.
The foregoing description of the specific embodiments so fully reveals the general nature of the invention that others can, by applying knowledge within the skill of the relevant art(s) (including the contents of the documents cited and incorporated by reference herein), readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Such adaptations and modifications are therefore intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein.
Claims
- A computerized method for dynamic content generation during gameplay in a virtual space, the method comprising: selecting, in response to a first user input command received during the gameplay, a virtual object visual within the virtual space, the virtual object providing audio content in the virtual space, wherein the virtual object includes an audio library associated therewith and the selecting the virtual object includes selecting the audio content from the audio library; defining, in response to a second user input command received during the gameplay, an interaction parameter for the virtual object, wherein the interaction parameter adjusts an audible component of the audio content and the interaction parameter adjustable to movement along at least one axis in the virtual space; in response to a third user input command, instantiating an avatar in the virtual space, the avatar operative for movement along the at least one axis in the virtual space; pairing a location of the avatar to the interaction parameter for the virtual object, the interaction parameter for the virtual object adjusting the audio content generated by the virtual object based on movement by the avatar; receiving user input commands during the gameplay for changing a distance between the virtual object and the avatar within the virtual space; based on the interaction parameter, dynamically creating a modified audio content generated by the virtual object based on the changes in distance between the virtual object and the avatar; and providing the modified audio content to an output device external to the virtual space.
- The method of claim 1, wherein the interaction parameter includes adjusting the audible component, the audible component including at least one of: volume, pitch, panning, sampling, synthesis parameter, and midline parameter.
- The method of claim 1, wherein the avatar is a first avatar, the virtual space including the first avatar and a second avatar, the method further comprising: receiving an avatar switch command; changing user control from the first avatar to the second avatar; and modifying the audio content generated by the virtual object based on a position of the second avatar.
- The method of claim 1, wherein the virtual object is a first virtual object, the virtual space including the first virtual object and a second virtual object, the method further comprising: receiving a switch virtual object command; disassociating engagement with the first virtual object and engaging the second virtual object; and updating the audio content based on the second virtual object.
- The method of claim 4, wherein the second virtual object is a visual object generating display content, the method further comprising: defining a second interaction parameter for the visual object; and modifying a visual output generated by the visual object based on the second interaction parameter and a change in distance between the visual object and the avatar.
- The method of claim 5, wherein the second interaction parameter includes at least one of: color assignment, light intensity and radius, contrast, brightness, and saturation.
- The method of claim 1 further comprising: defining a proximity factor for the virtual object; determining a proximity value based on the position of the avatar relative to the virtual object in the virtual space; and modifying the audio content generated by the virtual object based on the proximity factor and the proximity value.
- The method of claim 1 further comprising: determining a digital signal processing operation associated with the audio source, the digital signal processing operation including expressive parameters; and modifying at least one of the expressive parameters of the digital signal processing operation based on a change in position of the avatar.
- The method of claim 1, wherein the modifying the audio content generated by the virtual object is performed external to the virtual space.
- The method of claim 1 further comprising: engaging the avatar for movement of the virtual object in the virtual space; and adjusting the modifying of the audio content generated by the virtual object based on the movement of the virtual object.
- The method of claim 10 further comprising: applying force calculations to the movement of the virtual object, the force calculations emulating real world force factors.
- A computerized method for dynamic content generation in a virtual space during gameplay, the method comprising: selecting, in response to a first user input command received during the gameplay, a first virtual object visual within the virtual space, the first virtual object providing first audio content in the virtual space, wherein the virtual object includes an audio library associated therewith and the selecting the virtual object includes selecting the audio content from the audio library; defining, in response to a second user input command during the gameplay, an interaction parameter for the first virtual object, wherein the interaction parameter adjusts an audible component of the first audio content and the interaction parameter adjustable to movement along at least one axis in the virtual space; selecting, during the gameplay, a second virtual object, the second virtual object generating at least one of: second audio content and display content; pairing a location of the second virtual object to the interaction parameter for the first virtual object; receiving a third user input command during the gameplay for changing a distance between the first virtual object and the second virtual object within the virtual space; modifying the first audio content generated by the first virtual object based on a proximity relationship between the first virtual object and the second virtual object; and providing the modified audio content to an output device external to the virtual space.
- The method of claim 12 further comprising: in response to a user input command, instantiating an avatar in the virtual space, the avatar operative for movement along the at least one axis in the virtual space; receiving user input commands for changing the position of at least one of: the first virtual object and the second virtual object, within the virtual space; and modifying the first audio content generated by the first virtual object based on changes in position of the second virtual object.
- The method of claim 13, wherein the user input commands relate to changing the position of the avatar within the virtual space, the avatar changing the position of at least one of: the first virtual object and the second virtual object.
- A computerized system for dynamic content generation during gameplay in a virtual space, the system comprising: a computer readable medium having executable instructions stored thereon; and a processing device, in response to the executable instructions, operative to: select, in response to a first user input command received during the gameplay, a virtual object visual within the virtual space, the virtual object providing audio content in the virtual space, wherein the virtual object includes an audio library associated therewith and the selecting the virtual object includes selecting the audio content from the audio library; define, in response to a second user input command received during the gameplay, an interaction parameter for the virtual object, wherein the interaction parameter adjusts an audible component of the audio content and the interaction parameter adjustable to movement along at least one axis in the virtual space; in response to a third user input command, instantiate an avatar in the virtual space, the avatar operative for movement along the at least one axis in the virtual space; pair a location of the avatar to the interaction parameter for the virtual object, the interaction parameter for the virtual object adjusting the audio content provided by the virtual object based on movement by the avatar; receive user input commands during the gameplay for changing a distance between the virtual object and the avatar within the virtual space; based on the interaction parameter, dynamically create a modified audio content generated by the virtual object based on the changes in distance between the virtual object and the avatar; and provide the modified audio content to an output device external to the virtual space.
- The system of claim 15, wherein the interaction parameters include adjusting the audible component, including at least one of: pitch, sampling, synthesis parameter, and midline parameter.
- The system of claim 15, wherein the avatar is a first avatar, the virtual space including the first avatar and a second avatar, the processing device further operative to: receive an avatar switch command; change user control from the first avatar to the second avatar; and modify the audio content generated by the virtual object based on a position of the second avatar.
- The system of claim 15, wherein the virtual object is a first virtual object, the virtual space including the first virtual object and a second virtual object, the processing device further operative to: receive a switch virtual object command; disassociate engagement with the first virtual object and engage the second virtual object; and update the audio content based on the second virtual object.
- The system of claim 15, the processing device further operative to: define a proximity factor for the virtual object; determine a proximity value based on the position of the avatar relative to the virtual object in the virtual space; and modify the audio content generated by the virtual object based on the proximity factor and the proximity value.
- The system of claim 15 further comprising the output device external to the processing device and operative to modify the audio content based on the avatar position changes.