U.S. Pat. No. 8,556,720

SYSTEM AND METHOD FOR TOUCHSCREEN VIDEO GAME COMBAT

Assignee: Disney Enterprises, Inc.

Issue Date: January 14, 2008

Illustrative Figure

Abstract

An interactive computerized game system including a visual display, one or more user input devices, and a processor executing software that interacts with the display and input device(s) is disclosed. The software displays images of avatars. At least one of the user input devices is a touchscreen. During gameplay, the gameplayer may touch the touchscreen to provide input. An animation is displayed in response to user input matching a predefined input.

Description

DETAILED DESCRIPTION OF CERTAIN INVENTIVE EMBODIMENTS

The following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings wherein like parts are designated with like numerals throughout.

The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described.

The system comprises various modules, tools, and applications as discussed in detail below. As can be appreciated by one of ordinary skill in the art, each of the modules may comprise various sub-routines, procedures, definitional statements and macros. Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the following description of each of the modules is used for convenience to describe the functionality of the preferred system. Thus, the processes that are undergone by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.

The system modules, tools, and applications may be written in any programming language such as, for example, C, C++, BASIC, Visual Basic, Pascal, Ada, Java, HTML, XML, or FORTRAN, and executed on an operating system, such as variants of Windows, Macintosh, UNIX, Linux, VxWorks, or other operating system. C, C++, BASIC, Visual Basic, Pascal, Ada, Java, HTML, XML and FORTRAN are industry standard programming languages for which many commercial compilers can be used to create executable code.

All of these embodiments are intended to be within the scope of the invention herein disclosed. These and other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description of the preferred embodiments having reference to the attached figures, the invention not being limited to any particular preferred embodiment(s) disclosed.

Various embodiments of the invention provide a system and method of gameplay which is carried out using a touchscreen device to control the actions of an avatar displayed onscreen. Embodiments may help create an experience in which the actions of the avatar emulate the physical actions performed by the gameplayer.

Embodiments of the invention generally involve a visual display configured to display an image of at least one avatar. As used in this description, the term “avatar” can refer to any image, graphic, device, symbol and/or the like that represents a character that is displayed onscreen. The character displayed may be a human character or may be a fantastical character including but not limited to monsters and trolls. In certain embodiments of the invention, an avatar may represent the gameplayer's on-screen in-game persona. In other embodiments, avatars may represent other in-game personas the gameplayer has no control over. In another embodiment, the gameplayer may control a plurality of avatars and these avatars may each represent a different in-game persona.

With reference now to FIG. 1, a block diagram illustrates an exemplary system for playing a video game such as, e.g., a game machine, handheld game device, console game device, and the like. Another platform for gameplay could be a television set with internal or external software application execution capability. The device 100 generally includes a visual display 102 connected to a processor 106. At least a portion of the visual display 102 is a touchscreen. The visual display 102 may be configured to display an image of an avatar. The touchscreen portion of the visual display 102 may be configured to receive user input in response to the displayed image of an avatar. The touchscreen may, for example, be configured to display a variety of graphical user input elements in response to the gameplayer's selection of an avatar. For example, the touchscreen portion may include a display of one or more targets or sliders. The processor 106 may execute software configured to display an animation in response to received user input which matches a predefined input. An animation may be displayed when any user input matches a predefined input or pattern, or for each user input which matches a predefined input.

The animation displayed may be real-time with respect to the user input. By “real time” it is meant that the animation is created dynamically while the user input is received. In this manner the animation continues to change as a user manipulates a continuous control such as a slider bar. The animation may stop immediately when the user input stops, or it may continue to a predefined completion point once the user input stops.
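
For illustration only (no code appears in the patent itself), the real-time behavior described above may be sketched in Python. The class name, methods, and frame-indexing scheme are all hypothetical; the sketch only shows an animation advancing dynamically while continuous input arrives, then either stopping where it is or running to a predefined completion point.

```python
class AnimationState:
    """Tracks an animation driven by a continuous control such as a slider bar."""

    def __init__(self, total_frames, completion_point=None):
        self.total_frames = total_frames
        # Optional frame the animation runs to once user input stops.
        self.completion_point = completion_point
        self.frame = 0

    def on_input(self, progress):
        # progress is in [0.0, 1.0]; the displayed frame is chosen
        # dynamically while the user input is being received.
        self.frame = int(progress * (self.total_frames - 1))

    def on_input_stopped(self):
        # Either hold the current frame (stop immediately), or continue
        # to the predefined completion point.
        if self.completion_point is not None:
            self.frame = self.completion_point
```

A slider callback would call `on_input` with the slider's current fraction on every touch event, so the animation continues to change as the user manipulates the control.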

The physical actions the gameplayer takes in providing the user input emulate the animations displayed. By “emulate” it is meant that the physical action taken by the gameplayer indicates or represents the animation displayed. The physical action taken by the gameplayer may not exactly represent the animation displayed. For example, the physical action of tapping a graphical user input element on the screen may correspond to a stabbing or thrusting animation. Or, the physical action of sliding a graphical user input element may correspond to a slashing or sweeping animation.
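
The tap-to-thrust and slide-to-slash correspondence described above amounts to a simple lookup from gesture to animation. The following sketch is purely illustrative; the gesture names and mapping are assumptions, not part of the patent.

```python
# Illustrative mapping from the physical action to the animation it emulates.
GESTURE_TO_ANIMATION = {
    "tap": "thrust",    # tapping a target -> stabbing or thrusting animation
    "slide": "slash",   # sliding along a slider -> slashing or sweeping animation
}

def animation_for(gesture):
    """Return the animation corresponding to a physical gesture, if any."""
    return GESTURE_TO_ANIMATION.get(gesture)
```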

FIG. 2 illustrates an alternative embodiment, in which a device 200 includes a visual display 202, a processor 206, and at least one user input device 208. At least a portion of the visual display 202 is a touchscreen. The user input device 208 may comprise any device suitable for receiving user input, for example, a button or set of buttons, a keypad, or a directional pad (“D-pad”). In an embodiment including a D-pad input device, for example, each leg of the D-pad may be configured to control the movement of a particular avatar displayed on the visual display 202. For example, the upper leg of the D-pad may be configured to move an avatar in a forward direction on the visual display 202.

Embodiments of the invention may include a visual display having two or more visually separate portions. As illustrated in FIG. 3, a device 300 may include a visual display having a non-touchscreen portion 302 and a visually separate touchscreen portion 304. The device 300 may further include speakers 310.

In certain embodiments of the invention, the user input comprises input to a touchscreen. Touchscreen input can be achieved with a stylus, with an adapted stylus in the form of a pointing object, with a finger, or the like. Herein, the specification will use the term “stylus” to refer to any object used to provide input to the touchscreen. The specification in no way limits the types of objects which may be used to provide input to the touchscreen.

Many game systems provide a controller pad to receive user input. A controller pad generally comprises a directional pad and a variety of buttons and keys. Some game systems provide a touchscreen which allows the user to use a stylus to provide input. Traditional games solely use the controller pad to accept user input. One embodiment according to the invention incorporates the touchscreen into the gameplay. Another embodiment provides at least one graphical user input element on the touchscreen. The gameplayer interacts with the touchscreen and the graphical user input elements displayed on the touchscreen such that the actions of the avatar emulate the actions of the gameplayer. The gameplayer feels a direct physical connection between his actions and the actions of the avatar because the avatar's actions displayed onscreen are emulating the actions of the gameplayer. This creates gameplay that is more immersive and interactive than the traditional methods of gameplay.

FIG. 4A illustrates a visual display of an embodiment as used in a game designed for a portable game machine such as the Nintendo DS™. The visual display has a non-touchscreen portion 401 and a touchscreen portion 403. As can be seen in the figure, the touchscreen portion 403 displays an image of at least one avatar.

FIG. 4B illustrates a visual display of an embodiment as used in a game designed for a portable game machine such as the Nintendo DS™. The visual display has a non-touchscreen portion 402 and a touchscreen portion 406. The visual display shown in FIG. 4B is displayed in response to a user input received from the touchscreen portion shown in FIG. 4A. The user selects an avatar on the touchscreen shown in FIG. 4A and the visual display from FIG. 4B is displayed. As can be seen in the figure, the non-touchscreen portion 402 displays an image of at least one avatar. A first avatar 414 and a second avatar 418 are displayed on the non-touchscreen portion 402. Touchscreen portion 406 displays at least one graphical user input element. Targets 410 are displayed on the touchscreen portion 406 of the visual display. Each of the targets 410 corresponds to a predefined input. As the user makes contact with or taps the touchscreen portion of the display at the area of the touchscreen that corresponds to the target 410, avatar 414 will perform a stab or thrust attack on avatar 418. The action the avatar 414 performs emulates the physical action of the user tapping the touchscreen. There may be a plurality of targets 410 displayed on the touchscreen portion. Each time the user input corresponds with tapping a target 410, the avatar displayed on the non-touchscreen portion will perform a stab or thrust attack.
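
Matching a tap against a displayed target, as in FIG. 4B, is essentially a hit test. The sketch below is an illustrative assumption (the patent does not specify target geometry); it models targets as circles and returns the stab/thrust animation when a tap lands inside one. The radius value is hypothetical.

```python
import math

def tap_hits_target(tap, target_center, radius):
    """True when a touchscreen tap lands inside a circular target."""
    return math.dist(tap, target_center) <= radius

def animation_for_tap(tap, targets, radius=16):
    """Return the stab/thrust animation when the tap matches any target,
    otherwise None (no predefined input was matched)."""
    for center in targets:
        if tap_hits_target(tap, center, radius):
            return "thrust"
    return None
```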

FIG. 4C further illustrates a visual display of another embodiment as used in a game designed for a portable game machine such as the Nintendo DS™. Similar to FIG. 4A and FIG. 4B, the visual display has a non-touchscreen portion 418 and a touchscreen portion 422, and a first avatar 438 and a second avatar 442 are displayed on the non-touchscreen portion 418. The visual display shown in FIG. 4C is displayed after the animations displayed in FIG. 4B. Touchscreen portion 422 displays at least one graphical user input element. A slider 430 is displayed on the touchscreen portion 422. The slider 430 has a starting point 426, an ending point 422 and a sliding component 434. The slider 430 corresponds to a predefined input. The slider 430 may have different lengths and patterns. For example, the slider 430 may be in the shape of a spiral, or it may simply be a series of connected lines. As the user makes contact with or slides the stylus across the touchscreen portion of the display at the area of the touchscreen that corresponds to the slider, the avatar 438 will perform a slash or slice attack on avatar 442. The slicing or slashing action the avatar 438 performs emulates the physical action of the user sliding or dragging the stylus across the touchscreen portion.
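
A slider composed of connected line segments, as described above, can be tracked by projecting the stylus position onto the path and computing how far along the path the sliding component has travelled. This is a hypothetical sketch (the patent does not disclose an algorithm); a progress value of 1.0 would mean the stylus reached the ending point, completing the predefined input.

```python
import math

def nearest_progress(path, stylus):
    """Fraction (0..1) along a polyline slider path nearest the stylus.

    path is a non-degenerate list of (x, y) points forming connected
    line segments; stylus is the current (x, y) touch position.
    """
    total = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    best_dist, best_progress, travelled = float("inf"), 0.0, 0.0
    for a, b in zip(path, path[1:]):
        seg = math.dist(a, b)
        # Project the stylus onto segment a->b, clamped to the segment.
        t = 0.0
        if seg > 0:
            t = ((stylus[0] - a[0]) * (b[0] - a[0])
                 + (stylus[1] - a[1]) * (b[1] - a[1])) / seg ** 2
            t = max(0.0, min(1.0, t))
        point = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        d = math.dist(point, stylus)
        if d < best_dist:
            best_dist, best_progress = d, (travelled + t * seg) / total
        travelled += seg
    return best_progress
```

The same routine works whether the path is a straight line, a series of connected lines, or a polyline approximating a spiral.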

FIG. 4D further illustrates a visual display of still another embodiment as used in a game designed for a portable game machine such as the Nintendo DS™. Similar to FIGS. 4A and 4B, the visual display has a non-touchscreen portion 442 and a touchscreen portion 446, and a first avatar 450 and a second avatar 454 are displayed on the non-touchscreen portion 442. Touchscreen portion 446 displays at least one graphical user input element. A slider 458 is displayed on the touchscreen portion 446. The slider 458 has a starting point 462, an ending point 466 and a sliding component 470. The slider 458 corresponds to a predefined input. The slider 458 may have different lengths and patterns. For example, the slider 458 may be in the shape of a spiral, or it may simply be a series of connected lines. As the user makes contact with or slides the stylus across the touchscreen portion of the display at the area of the touchscreen that corresponds to the slider, the avatar 450 will guard against an attack from avatar 454. The guarding action the avatar 450 performs emulates the physical action of the user sliding or dragging the stylus across the touchscreen portion.

FIG. 5A illustrates a basic gameplay process 500 according to an embodiment of the invention. At block 508 at least one avatar is displayed. At block 512, user input is received. The user input comprises input to a touchscreen. At block 516, the user input is evaluated to check for a selection of one of the avatars displayed at block 508. In the embodiment illustrated in FIG. 4A, for example, the user contacts the touchscreen at the location where avatar 404 is displayed in order to select the avatar 404.

Referring again to FIG. 5A, if the user input corresponds with the selection of an avatar, then the process moves to block 520, wherein at least one graphical user input element is displayed. At block 524, user input is received. At block 528, the user input is evaluated to determine if a predefined user input was received. For example, as illustrated in FIG. 4B, targets 410 are displayed on touchscreen 406. The user may contact the touchscreen at one of the targets 410. Or, as illustrated in FIGS. 4C and 4D, sliders 430 and 458 are displayed on touchscreens 422 and 446 respectively. In FIG. 4C the user may contact the touchscreen 422 at starting point 426 and drag the stylus across the touchscreen 422 such that the sliding element 434 moves from the starting point 426 to the ending point 422 along the slider 430. Similarly, in FIG. 4D the user contacts the touchscreen 446 at starting point 462 and drags the stylus across the touchscreen 446 such that the sliding element 470 moves from the starting point 462 to the ending point 466 along the slider 458. The movement of the sliding element from the starting point to the ending point on the sliders comprises a predefined input.

Referring again to FIG. 5A, if the user input matches a predefined input, then the process moves to block 532, wherein an animation is displayed on the non-touchscreen portion of the visual display.
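
The flow of blocks 508 through 532 can be sketched as a small state machine: select an avatar, then animate on each matching input. This is an illustrative assumption only; the function and parameter names stand in for engine callbacks the patent does not specify, and inputs are modeled abstractly as tokens rather than touch coordinates.

```python
def gameplay_process(inputs, avatars, predefined, animate):
    """Sketch of FIG. 5A: returns animations triggered by matching inputs."""
    selected = None
    triggered = []
    for user_input in inputs:
        if selected is None:
            # Block 516: check whether the input selects a displayed avatar.
            if user_input in avatars:
                selected = user_input
                # Block 520 would display the graphical user input elements.
        elif user_input in predefined:
            # Block 528 matched a predefined input; block 532 displays the
            # corresponding animation on the non-touchscreen portion.
            triggered.append(animate(selected, user_input))
    return triggered
```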

FIG. 5B illustrates a basic gameplay process 550 according to another embodiment of the invention. The process described in FIG. 5B is similar to the process in FIG. 5A, but differs in that a time limit is imposed on the user to interact with the touchscreen and provide user input. At block 558 at least one avatar is displayed. At block 562, user input is received. The user input comprises input to a touchscreen. At block 566, the user input is evaluated to check for a selection of one of the avatars displayed at block 558. In the embodiment illustrated in FIG. 4A, for example, the user may contact the touchscreen at the location where avatar 404 is displayed in order to select the avatar 404.

Referring again to FIG. 5B, if the user input corresponds with the selection of an avatar, then the process moves to block 570, wherein at least one graphical user input element is displayed. At block 574, user input is received. At block 578, the user input is evaluated to determine if a predefined user input was received. For example, as illustrated in FIG. 4B, targets 410 are displayed on touchscreen 406. The user may contact the touchscreen at one of the targets 410. Or, as illustrated in FIGS. 4C and 4D, sliders 430 and 458 are displayed on touchscreens 422 and 446 respectively. In FIG. 4C the user may contact the touchscreen 422 at starting point 426 and drag the stylus across the touchscreen 422 such that the sliding element 434 moves from the starting point 426 to the ending point 422 along the slider 430. Similarly, in FIG. 4D the user may contact the touchscreen 446 at starting point 462 and drag the stylus across the touchscreen 446 such that the sliding element 470 moves from the starting point 462 to the ending point 466 along the slider 458. The movement of the sliding element from the starting point to the ending point on the sliders comprises a predefined input.

Referring again to FIG. 5B, at block 582, a timer is evaluated to check the amount of time remaining on the timer. If there is time remaining on the timer, the process moves to block 586, wherein an animation is displayed on the non-touchscreen portion of the visual display. At block 590, the timer is evaluated to check the amount of time remaining on the timer. If there is no time remaining on the timer, the process ends. If there is time remaining on the timer, the process will move back to block 574 and loop through blocks 574 to 590 until the timer expires.
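
The timed loop of blocks 574 through 590 may be sketched as follows. This is a hypothetical model, not the patent's implementation: the timer is approximated as a countdown where each input attempt consumes a fixed amount of time, and inputs are abstract tokens.

```python
def timed_gameplay(inputs, predefined, time_limit, cost_per_input=1):
    """Sketch of FIG. 5B blocks 574-590: animate matching inputs until
    the timer expires."""
    remaining = time_limit
    animations = []
    for user_input in inputs:
        if remaining <= 0:
            # Block 590: no time remaining on the timer; the process ends.
            break
        if user_input in predefined:
            # Blocks 578/586: predefined input matched, display an animation.
            animations.append(f"animate:{user_input}")
        # Each loop iteration consumes time (illustrative timer model).
        remaining -= cost_per_input
    return animations
```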

It will be understood that numerous and various modifications can be made from those previously described embodiments and that the forms of the invention described herein are illustrative only and are not intended to limit the scope of the invention.

The above-described method may be realized in a program format to be stored on a computer-readable recording medium that includes any kind of recording device for storing computer-readable data, for example, a CD-ROM, a DVD, a magnetic tape, a memory card, or a disk, and may also be realized in a carrier wave format (e.g., Internet transmission or Bluetooth transmission).

While specific blocks, sections, devices, functions and modules may have been set forth above, a skilled technologist will realize that there are many ways to partition the system, and that there are many parts, components, modules or functions that may be substituted for those listed above.

While the above detailed description has shown, described, and pointed out the fundamental novel features of the invention as applied to various embodiments, it will be understood that various omissions and substitutions and changes in the form and details of the system illustrated may be made by those skilled in the art, without departing from the intent of the invention.

Claims

  1. A computer-implemented method for interacting with a virtual environment, the method comprising: displaying, via a touchscreen display device, a plurality of avatars within the virtual environment, the plurality of avatars including a first avatar and a second avatar; determining a series of different actions of the second avatar relative to the first avatar; serially displaying, on a touchscreen portion of the touchscreen display device, a series of different graphical user interface (GUI) elements, each GUI element corresponding to a respective action of the series of actions, each GUI element triggerable by a respective predefined input pattern corresponding to a real world physical action, wherein at least one of the GUI elements comprises a slider GUI element; detecting, for each of the displayed series of GUI elements, the respective predefined input pattern performed by the user and that triggers the respective GUI element; and causing, for each detected respective predefined input pattern, the first avatar to emulate the respective real world physical action to respond to the action of the second avatar.
  2. The computer-implemented method of claim 1, wherein the series of GUI elements includes a first slider GUI element and a second slider GUI element, wherein the first and second slider GUI elements have distinct predefined input patterns, wherein any real world physical action triggering the first slider GUI element does not trigger the second slider GUI element, wherein any real world physical action triggering the second slider GUI element does not trigger the first slider GUI element.
  3. The computer-implemented method of claim 2, wherein the respective predefined input pattern comprises one or more line segments, and the respective real world physical action comprises moving a slider element along the one or more line segments.
  4. The computer-implemented method of claim 3, wherein a first action of the series of actions of the second avatar is an attack on the first avatar.
  5. The computer-implemented method of claim 4, wherein the emulated respective real world physical action of the first avatar comprises at least one of a defensive maneuver and an attack maneuver performed on the second avatar in response to the first action.
  6. The computer-implemented method of claim 5, wherein causing the at least one avatar to emulate the respective real world physical action performed by the user comprises dynamically animating, on a non-touchscreen portion of the touchscreen display device, the first avatar in real-time while the user inputs the respective predefined input pattern on the touchscreen portion of the touchscreen display device.
  7. The computer-implemented method of claim 6, further comprising causing the at least one avatar to continue emulating the respective real world physical action performed by the user after the user has inputted the respective predefined input pattern and until a predefined completion point is reached.
  8. The computer-implemented method of claim 7, wherein displaying the series of GUI elements occurs in response to receiving a selection of the first avatar through the touchscreen portion of the touchscreen display device.
  9. A non-transitory computer-readable medium including instructions that, when executed by a processing unit, cause the processing unit to perform the steps of: displaying, via a touchscreen display device, a plurality of avatars within a virtual environment, the plurality of avatars including a first avatar and a second avatar; determining a series of different actions of the second avatar relative to the first avatar; serially displaying, on a touchscreen portion of the touchscreen display device, a series of different graphical user interface (GUI) elements, each GUI element corresponding to a respective action of the series of actions, each GUI element triggerable by a respective predefined input pattern corresponding to a real world physical action, wherein at least one of the GUI elements comprises a slider GUI element; detecting, for each of the displayed series of GUI elements, the respective predefined input pattern performed by the user and that triggers the respective GUI element; and causing, for each detected respective predefined input pattern, the first avatar to emulate the respective real world physical action to respond to the action of the second avatar.
  10. The non-transitory computer-readable medium of claim 9, wherein the series of GUI elements includes a first slider GUI element and a second slider GUI element, wherein the first and second slider GUI elements have distinct predefined input patterns, wherein any real world physical action triggering the first slider GUI element does not trigger the second slider GUI element, wherein any real world physical action triggering the second slider GUI element does not trigger the first slider GUI element.
  11. The non-transitory computer-readable medium of claim 10, wherein the respective predefined input pattern comprises one or more line segments, and the respective real world physical action comprises moving a slider element along the one or more line segments.
  12. The non-transitory computer-readable medium of claim 11, wherein a first action of the series of actions of the second avatar is an attack on the first avatar.
  13. The non-transitory computer-readable medium of claim 12, wherein the emulated respective real world physical action of the first avatar comprises at least one of a defensive maneuver and an attack maneuver performed on the second avatar in response to the first action.
  14. The non-transitory computer-readable medium of claim 13, wherein causing the at least one avatar to emulate the respective real world physical action performed by the user comprises dynamically animating, on a non-touchscreen portion of the touchscreen display device, the first avatar in real-time while the user inputs the respective predefined input pattern on the touchscreen portion of the touchscreen display device.
  15. The non-transitory computer-readable medium of claim 14, further comprising causing the at least one avatar to continue emulating the respective real world physical action performed by the user after the user has inputted the respective predefined input pattern and until a predefined completion point is reached.
  16. The non-transitory computer-readable medium of claim 15, wherein displaying the series of GUI elements occurs in response to receiving a selection of the first avatar through the touchscreen portion of the touchscreen display device.
  17. A system for interacting with a virtual environment, comprising: one or more computer processors; and a memory containing a program which, when executed by the one or more computer processors, performs an operation for interacting with the virtual environment, the operation comprising: displaying, via a touchscreen display device, a plurality of avatars within the virtual environment, the plurality of avatars including a first avatar and a second avatar; determining a series of different actions of the second avatar relative to the first avatar; serially displaying, on a touchscreen portion of the touchscreen display device, a series of different graphical user interface (GUI) elements, each GUI element corresponding to a respective action of the series of actions, each GUI element triggerable by a respective predefined input pattern corresponding to a real world physical action, wherein at least one of the GUI elements comprises a slider GUI element; detecting, for each of the displayed series of GUI elements, the respective predefined input pattern performed by the user and that triggers the respective GUI element; and causing, for each detected respective predefined input pattern, the first avatar to emulate the respective real world physical action to respond to the action of the second avatar.
  18. The system of claim 17, wherein the series of GUI elements includes a first slider GUI element and a second slider GUI element, wherein the first and second slider GUI elements have distinct predefined input patterns, wherein any real world physical action triggering the first slider GUI element does not trigger the second slider GUI element, wherein any real world physical action triggering the second slider GUI element does not trigger the first slider GUI element.
  19. The system of claim 18, wherein the respective predefined input pattern comprises one or more line segments, and the respective real world physical action comprises moving a slider element along the one or more line segments.
  20. The system of claim 19, wherein a first action of the series of actions of the second avatar is an attack on the first avatar.
  21. The system of claim 20, wherein the emulated respective real world physical action of the first avatar comprises at least one of a defensive maneuver and an attack maneuver performed on the second avatar in response to the first action.
  22. The system of claim 21, wherein causing the at least one avatar to emulate the respective real world physical action performed by the user comprises dynamically animating, on a non-touchscreen portion of the touchscreen display device, the first avatar in real-time while the user inputs the respective predefined input pattern on the touchscreen portion of the touchscreen display device.
  23. The system of claim 22, wherein the operation further comprises causing the at least one avatar to continue emulating the respective real world physical action performed by the user after the user has inputted the respective predefined input pattern and until a predefined completion point is reached.
  24. The system of claim 23, wherein displaying the series of GUI elements occurs in response to receiving a selection of the first avatar through the touchscreen portion of the touchscreen display device.