U.S. Pat. No. 7,690,975

IMAGE DISPLAY SYSTEM, IMAGE PROCESSING SYSTEM, AND VIDEO GAME SYSTEM

Assignee: Sony Computer Entertainment Inc.

Issue Date: October 4, 2005

Illustrative Figure

Abstract

A function 188 stores, in a first memory for difference 24, pickup image data from a CCD camera 42 at a predetermined timing, and a function 190 stores, in a second memory for difference 26, pickup image data from the CCD camera 42 at another timing. A function 192 obtains the difference between the pickup image data stored in the first memory for difference 24 and that stored in the second memory for difference 26, a function 194 specifies an image that has moved based on the difference data, and a function 196 determines whether or not the moving image is touching a character image. When it is determined that the moving image has come into contact with the character image, a function 200 increases the value of parameters such as experiential value, physical energy, and offensive power. It is thereby possible to expand the range of a card game, which used to be played only in a real space, into a virtual space, and to offer a new game that merges the card game and the video game.
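The differencing pipeline the abstract describes can be sketched as follows. This is not the patent's implementation; function names, the threshold, and the parameter names are illustrative assumptions.

```python
# Hedged sketch: two frames captured at different timings are differenced
# per pixel, moving pixels are collected, and a contact with the character's
# screen rectangle bumps a parameter. All names and values are illustrative.

def frame_difference(frame_a, frame_b, threshold=16):
    """Return the set of (x, y) pixels whose intensity changed between frames."""
    moved = set()
    for y, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for x, (pa, pb) in enumerate(zip(row_a, row_b)):
            if abs(pa - pb) > threshold:
                moved.add((x, y))
    return moved

def touches_character(moved_pixels, char_rect):
    """char_rect = (x0, y0, x1, y1); True if any moved pixel falls inside it."""
    x0, y0, x1, y1 = char_rect
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in moved_pixels)

def apply_contact(params, moved_pixels, char_rect, bonus=1):
    """Increase a parameter (here, experiential value) on contact."""
    if touches_character(moved_pixels, char_rect):
        params = dict(params, experience=params["experience"] + bonus)
    return params
```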

Description


BEST MODE FOR CARRYING OUT THE INVENTION

In the following, preferred embodiments in which an image display system and an image processing system relating to the present invention have been applied to a video game system will be described in detail with reference to the accompanying drawings, FIG. 1 to FIG. 29.

As shown in FIG. 1, the video game system 10 relating to the present embodiment includes a video game machine 12 and various external units 14.

The video game machine 12 includes a CPU 16 which executes various programs, a main memory 18 which stores various programs and data, an image memory 20 in which image data is recorded (drawn), an I/O port 22 which exchanges data with the various external units 14, and a first memory for difference 24 and a second memory for difference 26 used to perform the differential calculation described below.

The various external units 14 connected to the I/O port 22 include a monitor 30 connected via a display-use interface (I/F) 28, an optical disk drive 34 which carries out reproducing/recording from/on an optical disk 32 (DVD-ROM, DVD-RW, DVD-RAM, CD-ROM, and the like), a memory card 38 connected via a memory card-use interface (I/F) 36, a CCD camera 42 connected via a pickup-use interface (I/F) 40, a hard disk drive (HDD) 46 which carries out reproducing/recording from/on a hard disk 44, and a speaker 50 connected via an audio-use interface 48. It is a matter of course that a connection may be established with the Internet (not illustrated) from the I/O port 22 via a router, also not illustrated.

Data input to and output from the external units 14, as well as data processing and the like within the video game machine 12, are carried out by way of the CPU 16 and the main memory 18. In particular, pickup data and image data are recorded (drawn) in the image memory 20.

Next, characteristic functions held by the video game system 10 relating to the present embodiment will be explained with reference to FIG. 2 to FIG. 31; that is, functions implemented by programs provided to the video game machine 12 via a randomly accessible recording medium such as the optical disk 32, the memory card 38, or the hard disk 44, or via a network such as the Internet or an intranet.

Firstly, the card 54 used in this video game system 10 will be explained. This card 54 has the same size and thickness as a card used in a general card game. As shown in FIG. 2A, a picture representing the character associated with the card 54 is printed on the front face. As shown in FIG. 2B, an identification image 56 is printed on the reverse side. It is a matter of course that a transparent card is also available; in this case, only the identification image 56 is printed.

The identification image 56 is configured from patterns of a two-dimensional code (hereinafter abbreviated as "2D code") as shown in FIG. 2B. One unit of the identification image 56 is taken as one block, and a logo part 58 and a code part 60 are arranged so as to be separated by one block, within a rectangle 9.5 blocks long vertically and seven blocks long horizontally. In the logo part 58 there is provided a black reference cell 62, a 2D code for notifying the reference position of the code part 60 and the orientation of the card 54, shaped as a large rectangle 1.5 blocks long vertically and 7 blocks long horizontally. In some cases the name of the character, a mark (logo) for advertisement, or the like is also printed in the logo part 58, for example.

The code part 60 occupies a square range of seven blocks both vertically and horizontally, and at each of its corner sections a corner cell 64, a black square for example, is placed for recognizing the identification information. Furthermore, identification cells 66, each also a black square for example, are provided in a two-dimensional pattern in the area surrounded by the four corner cells 64, so that the identification information can be recognized.

Since a method for detecting the position of the identification image 56 from the pickup image data, a method for detecting the images of the corner cells 64, and a method for detecting the 2D pattern of the identification cells 66 are described in detail in Patent Document 1 (Japanese Patent Laid-open Publication No. 2000-82108) mentioned above, the reader is referred to Patent Document 1.

In the present embodiment, an association table which associates various 2D patterns of the identification cells 66 with the identification numbers corresponding to those patterns is registered, for example in the form of a database 68 (2D code database, see FIG. 6 and FIG. 7), on the hard disk 44, the optical disk 32, or the like. Therefore, by collating a detected 2D pattern of the identification cells 66 with the association table in the database 68, the identification number of the card 54 is easily detected.
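The collation step above amounts to a table lookup keyed by the detected cell pattern. The sketch below assumes a pattern encoded as a tuple of rows of 0/1 cells; the table contents and identification numbers are invented for illustration.

```python
# Hedged sketch of the association table (2D code database 68): a detected
# pattern of identification cells is collated against registered patterns to
# recover the card's identification number. Entries here are illustrative.

# Pattern encoded as a tuple of row tuples (1 = black cell, 0 = white).
CODE_DATABASE = {
    ((1, 0, 1),
     (0, 1, 0),
     (1, 0, 1)): 1001,
    ((1, 1, 0),
     (0, 1, 1),
     (1, 0, 0)): 1002,
}

def identify_card(pattern):
    """Return the identification number for a detected 2D pattern, or None
    when the pattern is not registered."""
    return CODE_DATABASE.get(pattern)
```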

As shown in FIG. 3A and FIG. 3B, one function implemented by the video game system is to pick up, with the CCD camera 42, images of six cards 54-1, 54-2, 54-3, 54-4, 54-5, and 54-6, for example, placed on a desk, table, or the like 52, and to display the picked-up images on the monitor 30. Simultaneously, on the images of the cards 54-1 to 54-6 displayed on the screen of the monitor 30, that is, on the identification images 56-1, 56-2, 56-3, 56-4, 56-5, and 56-6 respectively attached to the cards 54-1 to 54-6, images of objects (characters) 70-1, 70-2, 70-3, 70-4, 70-5, and 70-6 respectively associated with the identification images 56-1 to 56-6 are displayed so as to be superimposed thereon. With this manner of display, it is possible to achieve a game which is a mixture of a card game and a video game. Here, "character" indicates an object such as a human being, an animal, or a hero or the like who appears in a TV show, an animated movie, and so on.

As shown in FIG. 3A, the CCD camera 42 is installed on a stand 72 fixed on the desk, table, or the like 52. The imaging surface of the CCD camera 42 may be adjusted, for example by users 74A and 74B, so as to be oriented toward the part on which the cards 54-1 to 54-6 are placed. It is a matter of course that, as shown in FIG. 4A, an image of a user 74 holding one card 54-2, for example, may be picked up so as to appear on the monitor 30, thereby displaying, as shown in FIG. 4B, the image of the user 74, the identification image 56-2 of the card 54-2, and the character image 70-2. Accordingly, it is possible to create a scene in which a character is put on the card 54-2 held by the user 74.

The functions of the present embodiment described above are achieved when the CPU 16 executes an application program implementing those functions, out of the various programs installed on the hard disk 44, for example.

As shown in FIG. 5, the application program 80 includes a card recognition program 82, a character appearance display program 84, a character action display program 86, a first card position forecasting program 88, a second card position forecasting program 90, an image motion detecting program 92, a card motion detecting program 94, a card re-recognition program 96, a character changing program 98, and an information table reference program 100.

Here, the functions of the application program 80 will be explained with reference to FIG. 5 to FIG. 29.

Card Recognition Program

Firstly, the card recognition program 82 performs processing for recognizing the identification image 56-1 of a card placed on the desk, table, or the like 52 (for example, the card 54-1 in FIG. 3A), so as to specify the character image to be displayed on the identification image 56-1 (for example, the image 70-1 in FIG. 3B). As shown in FIG. 6, the card recognition program 82 includes a pickup image drawing function 102, a reference cell finding function 104, an identification information detecting function 106, a camera coordinate detecting function 108, and a character image searching function 110. Here, the term "recognition" means detecting the identification number and the orientation of the card 54-1 from the identification image 56-1 of the card 54-1, which has been detected from the pickup image data drawn in the image memory 20.

The pickup image drawing function 102 sets up the image of the picked-up object as a background image in the image memory 20, and draws the image. As one example of processing for setting the image as the background image, setting the Z value used in Z-buffering can be cited.

As described above, the reference cell finding function 104 finds the image data of the reference cell 62 of the logo part 58 within the image data drawn in the image memory 20 (the pickup image data), and detects the position of the image data of the reference cell 62. The position of the image data of the reference cell 62 is detected as a screen coordinate.

As shown in FIG. 7, the identification information detecting function 106 detects the image data of the corner cells 64 based on the detected position of the image data of the reference cell 62. The image data of the area 112 formed by the reference cell 62 and the corner cells 64 is subjected to an affine transformation, treating the image data as equivalent to the image 114 which views the identification image 56-1 of the card 54-1 from directly above, and the 2D pattern of the code part 60, that is, the code 116 made of the 2D patterns of the corner cells 64 and the identification cells 66, is extracted. Then, the identification number and the like are detected from the extracted code 116. As described above, detection of the identification number is carried out by collating the extracted code 116 with the 2D code database 68.
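The rectification-and-extraction step can be sketched as follows. This is not the patent's method: bilinear interpolation of the four corner positions stands in for the full affine transformation, and the grid size and threshold are assumptions.

```python
# Hedged sketch: given the detected corner-cell positions, sample the pickup
# image on a regular grid (equivalent to a "top view" rectification) and
# threshold each sample to recover the code as a grid of 0/1 cells.
# Grid size and threshold are illustrative.

def sample_code(image, corners, grid=7, threshold=128):
    """corners = (top_left, top_right, bottom_left, bottom_right), each (x, y).
    image is a row-major grid of intensities. Returns a grid x grid tuple of
    0/1 cells (1 = dark)."""
    (tlx, tly), (trx, try_), (blx, bly), (brx, bry) = corners
    code = []
    for j in range(grid):
        v = j / (grid - 1)
        row = []
        for i in range(grid):
            u = i / (grid - 1)
            # Bilinear blend of the four corners gives the sample point.
            x = (1 - u) * (1 - v) * tlx + u * (1 - v) * trx \
                + (1 - u) * v * blx + u * v * brx
            y = (1 - u) * (1 - v) * tly + u * (1 - v) * try_ \
                + (1 - u) * v * bly + u * v * bry
            row.append(1 if image[int(round(y))][int(round(x))] < threshold else 0)
        code.append(tuple(row))
    return tuple(code)
```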

As shown in FIG. 8, the camera coordinate detecting function 108 obtains a camera coordinate system (six axial directions: x, y, z, θx, θy, and θz) having the camera viewing point C0 as its origin, based on the detected screen coordinate and the focusing distance of the CCD camera 42. Then, the camera coordinate of the identification image 56-1 of the card 54-1 is obtained. At this point, the camera coordinate at the center of the logo part 58 of the card 54-1 and the camera coordinate at the center of the code part 60 are obtained.

Since a method for obtaining the camera coordinate of an image from the screen coordinate of the image drawn in the image memory 20, and a method for obtaining the screen coordinate on the image memory 20 from the camera coordinate of a certain image, are described in detail in Patent Document 2 (Japanese Patent Laid-open Publication No. 2000-322602) mentioned above, the reader is referred to Patent Document 2.
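The screen/camera coordinate relation behind these conversions can be sketched with a simple pinhole model; this is a generic illustration, not the method of Patent Document 2, and the focal length and card width are assumed values.

```python
# Hedged sketch of a pinhole model: a camera-space point (x, y, z) projects
# to screen position (f*x/z, f*y/z); knowing the card's real width lets the
# depth z be recovered from its apparent width, after which the projection
# can be inverted. All values are illustrative.

def camera_to_screen(point, f):
    """Project a camera-space point to the screen, focal length f."""
    x, y, z = point
    return (f * x / z, f * y / z)

def depth_from_size(real_width, screen_width, f):
    """Apparent width shrinks as 1/z, so z = f * real_width / screen_width."""
    return f * real_width / screen_width

def screen_to_camera(sx, sy, z, f):
    """Invert the projection once the depth z is known."""
    return (sx * z / f, sy * z / f, z)
```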

The character image searching function 110 searches the object information table 118 for a character image (for example, the character image 70-1 shown in FIG. 3B), based on the detected identification number.

For example, as shown in FIG. 9, the object information table 118 is made up of a large number of records, and in one record there are registered an identification number; basic parameters (experiential value, level); a storage head address of the character image data for level 1 and the parameters of level 1 (physical energy, offensive power, defensive power, and the like); a storage head address of the character image data for level 2 and the parameters of level 2; a storage head address of the character image data for level 3 and the parameters of level 3; and a valid/invalid bit.

As for the image data, the image data corresponding to the level, which is one of the basic parameters, is read out. In other words, if the level is 1, the image data is read out from the storage head address corresponding to level 1, and the parameters of level 1 are also read out; levels 2 and 3 are handled in a similar manner. The valid/invalid bit determines whether or not the record is valid. "Invalid" may be set, for example, when an image of the character is not ready at the current stage, or when the character has been beaten and killed in a battle set in the video game.

The character image searching function 110 searches the object information table 118 for the record associated with the identification number, and if the record thus found is "valid", the image data 120 is read out from the storage head address corresponding to the current level, out of the multiple storage head addresses registered in the record. For instance, the image data 120 associated with the character is read out from that storage head address in the data file 122, which is recorded on the hard disk 44, the optical disk 32, or the like, and in which a large number of image data items are registered. If the record thus found is "invalid", the image data 120 is not read out.
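The record layout and lookup described above can be sketched as follows; the field names, addresses, and parameter values are invented for illustration and are not taken from the patent.

```python
# Hedged sketch of the object information table 118: each record keys an
# identification number to per-level storage addresses and parameters, plus
# a valid/invalid bit. All contents are illustrative.

OBJECT_INFO_TABLE = {
    1001: {
        "valid": True,
        "level": 2,  # current level, one of the basic parameters
        "levels": {
            1: {"address": 0x1000, "energy": 10, "offense": 3, "defense": 2},
            2: {"address": 0x2000, "energy": 25, "offense": 7, "defense": 5},
            3: {"address": 0x3000, "energy": 60, "offense": 15, "defense": 9},
        },
    },
    1002: {"valid": False, "level": 1, "levels": {}},  # e.g. beaten in battle
}

def lookup_character(ident):
    """Return (storage head address, level parameters) for the card's current
    level, or None when the record is missing or marked invalid."""
    record = OBJECT_INFO_TABLE.get(ident)
    if record is None or not record["valid"]:
        return None
    entry = record["levels"][record["level"]]
    return entry["address"], {k: v for k, v in entry.items() if k != "address"}
```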

Therefore, when one card 54-1 is placed on the desk, table, or the like 52, the card recognition program 82 is started, and the character image 70-1 associated with the identification number and the level (hereinafter referred to as the "identification number and the like") specified by the identification image 56-1 of the placed card 54-1 is identified. If the character image does not change according to the level, only one storage head address for the image data 120 is registered in each record of the object information table 118; in such a case, the character image 70-1 associated with the identification number is specified by the identification image 56-1 of the card 54-1.

With the card recognition program 82, it is possible to produce a visual effect merging the real space and the virtual space. Control is then passed from the card recognition program 82 to the various application programs (the character appearance display program 84, the character action display program 86, the first card position forecasting program 88, the second card position forecasting program 90, and the like).

Character Appearance Display Program

The character appearance display program 84 creates a display in which the character image 70-1, associated with the identification number and the like specified by the identification image 56-1 of a detected card (for example, the card 54-1), appears superimposed on the identification image 56-1 of the card 54-1. As shown in FIG. 10, the character appearance display program 84 includes an action data string searching function 124, an appearance posture setup function 126, a 3D image setup function 128, an image drawing function 130, an image displaying function 132, and a repetitive function 134.

The action data string searching function 124 searches the appearance action information table 136 for an action data string for displaying a scene in which the character appears, based on the identification number and the like. Specifically, the function 124 first searches the appearance action information table 136, which is recorded on the hard disk 44, the optical disk 32, or the like, and which registers a storage head address of an action data string for each record, for the record associated with the identification number and the like. Then, from the storage head address registered in the record thus found, the function 124 reads out an action data string 138 representing the action in which the character image 70-1 appears, out of the data file 140, which is recorded on the hard disk 44, the optical disk 32, or the like, and in which a large number of action data strings 138 are registered.

The appearance posture setup function 126 sets one posture in the process in which the character image 70-1 appears. For example, based on the action data of the i-th frame (i = 1, 2, 3, . . . ) of the action data string 138 thus read out, the vertex data of the character image 70-1 is moved in the camera coordinate system, so that one posture is set up.

The 3D image setup function 128 sets up a three-dimensional image of one posture in the process in which the character image 70-1 appears on the identification image 56-1 of the card 54-1, based on the detected camera coordinate of the identification image 56-1 of the card 54-1.

The image drawing function 130 subjects the three-dimensional image of one posture, in the process in which the character image 70-1 appears, to a perspective transformation into an image in the screen coordinate system, and draws the transformed image in the image memory 20 (including hidden surface processing). At this time, the Z value of the character image 70-1 in Z-buffering is reconfigured frame by frame, thereby presenting a scene in which the character image 70-1 gradually appears from below the identification image 56-1 of the card 54-1.

The image displaying function 132 outputs the image drawn in the image memory 20, frame by frame, to the monitor 30 via the I/O port 22, and displays the image on the screen of the monitor 30.

The repetitive function 134 sequentially repeats the processing of the appearance posture setup function 126, the 3D image setup function 128, the image drawing function 130, and the image displaying function 132. Accordingly, it is possible to display a scene in which the character image 70-1 associated with the identification number and the like of the card 54-1 appears on the identification image 56-1 of the card 54-1.
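The per-frame pipeline driven by the repetitive function can be sketched as a loop over the action data string; the stage callbacks below are illustrative stand-ins, not the patent's functions.

```python
# Hedged sketch of the repetitive loop: each frame sets up one posture from
# the action data, builds the 3D image at the card's camera coordinate,
# draws it, and displays it, until the action data string is exhausted.
# The stage functions are illustrative stand-ins that just tag their input.

def play_action(action_data_string, setup_posture, setup_3d, draw, display):
    for action_data in action_data_string:
        posture = setup_posture(action_data)   # posture setup (function 126/144)
        image_3d = setup_3d(posture)           # 3D image setup (function 128/146)
        frame = draw(image_3d)                 # perspective transform + draw
        display(frame)                         # output frame to the monitor

# Minimal demonstration: record what reaches the display stage.
log = []
play_action(
    ["rise", "stand"],
    setup_posture=lambda a: ("posture", a),
    setup_3d=lambda p: ("3d",) + p,
    draw=lambda m: ("drawn",) + m,
    display=log.append,
)
```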

Character Action Display Program

The character action display program 86 displays scenes in which the character performs the following actions: waiting, attacking, enchanting, protecting another character, and the like. As shown in FIG. 11, and much like the character appearance display program 84 described above, the character action display program 86 includes an action data string searching function 142, a posture setup function 144, a 3D image setup function 146, an image drawing function 148, an image display function 150, and a repetitive function 152.

The action data string searching function 142 searches the various action information tables 154, each associated with a scene, for an action data string for displaying the scene in which the character waits, attacks, enchants, or protects another character.

Specifically, the action information table 154 associated with the action to be displayed is first identified from among the various action information tables 154, which are recorded, for example, on the hard disk 44, the optical disk 32, or the like, and in which a storage head address of an action data string is registered for each record. The action data string searching function 142 then searches the identified action information table 154 for the record associated with the identification number and the like.

Then, out of the data file 158, which is recorded on the hard disk 44, the optical disk 32, or the like and in which a large number of action data strings 156 are registered, the action data string searching function 142 reads out, from the head address registered in the record thus found, the action data string 156 associated with the action to be displayed this time (the character's action, such as waiting, attacking, enchanting, or protecting another character).

The posture setup function 144 sets, for the character of the card 54-1 for example, one posture in the process in which the character image 70-1 is waiting, attacking, enchanting, or protecting another character. For instance, based on the action data of the i-th frame (i = 1, 2, 3, . . . ) of the action data string 156 thus read out, the vertex data of the character image 70-1 is moved in the camera coordinate system and one posture is set up.

The 3D image setup function 146 sets up, on the identification image 56-1 of the card 54-1, three-dimensional images of one posture in the process in which the character image 70-1 is waiting, attacking, enchanting, or protecting another character, based on the detected camera coordinate of the identification image 56-1 on the card 54-1.

The image drawing function 148 subjects the 3D images of one posture, in the process in which the character image 70-1 is waiting, attacking, enchanting, or protecting another character, to a perspective transformation into images in the screen coordinate system, and draws the transformed images into the image memory 20 (including hidden surface processing).

The image display function 150 outputs the image drawn in the image memory 20, frame by frame, to the monitor 30 via the I/O port 22, and displays the image on the screen of the monitor 30.

The repetitive function 152 sequentially repeats the processing of the posture setup function 144, the 3D image setup function 146, the image drawing function 148, and the image display function 150. Accordingly, it is possible to display scenes in which the character image 70-1 is waiting, attacking, enchanting, or protecting another character.

With the card recognition program 82, the character appearance display program 84, and the character action display program 86 described above, it is possible to have a character from a card game appear in the scenario of a video game and perform various actions. In other words, a card game which has been enjoyed only in a real space can be extended into the virtual space, thereby offering a new type of game merging the card game and the video game.

Next, the first card position forecasting program 88 will be described. As shown in FIG. 12, the program 88 is configured assuming a case where, for example, three cards are placed side by side.

As shown in FIG. 13A, when the card recognition program 82 described above detects the position of the identification image 56-1 of one card 54-1, the first card position forecasting program 88 forecasts, based on this identification image 56-1, the placement positions of the other cards 54-2 and 54-3 (see FIG. 12). As shown in FIG. 14, the program 88 includes a detection area setup function 162, a reference cell finding function 164, an identification information detecting function 166, and a character image searching function 168.

Here, the processing operations of the first card position forecasting program 88, in particular the operations after the camera coordinate of the identification image 56-1 of one card 54-1 has been detected by the card recognition program 82 as shown in FIG. 13A, will be explained with reference to FIG. 15.

In step S1 of FIG. 15, the detection area setup function 162 obtains the camera coordinate of the detection area 170, a certain range including the identification image 56-1 as shown in FIG. 13A, based on the camera coordinate of the identification image 56-1 as the one detected image. This detection area 170 is a rectangular area sized for the case where multiple cards are placed side by side.

Thereafter, in step S2, the drawing range 172 of the detection area 170 on the image memory 20 is obtained based on the camera coordinate of the detection area 170 thus obtained.

Then, in step S3, the reference cell finding function 164 determines whether or not image data of a reference cell 62 exists in the drawing range 172 of the detection area 170 in the image memory 20.

If the image data of a reference cell 62 exists, assuming, as shown in FIG. 12A for example, the case where two cards 54-2 and 54-3 are arranged laterally beside the card 54-1, the processing proceeds to step S4 in FIG. 15, and the identification information detecting function 166 subjects the image data of the region formed by the reference cell 62 and the corner cells 64 to an affine transformation. Then, the identification numbers associated with the cards 54-2 and 54-3 are detected based on the identification images 56-2 and 56-3 of the cards 54-2 and 54-3, respectively. Detection of the identification numbers is carried out by collating the codes extracted from the identification images 56-2 and 56-3 with the 2D code database 68.

Then, in step S5, the character image searching function 168 searches the object information table 118 for character image data 120 based on the identification number and the like of each of the cards 54-2 and 54-3. For example, the records respectively associated with the identification numbers are searched out from the object information table 118, and if these records are "valid", the image data 120 is read out from the storage head address corresponding to the current level, out of the multiple storage head addresses registered in each record.

On the other hand, if it is determined in step S3 that no image data of a reference cell 62 exists, the reference cell finding function 164 proceeds to step S6 and outputs, on the screen of the monitor 30, an error message prompting the user to place all the cards. Then, after waiting for a predetermined period of time (for example, three seconds) in step S7, the processing returns to step S3 and the above processing is repeated.

Then, at the stage where all the character images 70-1 to 70-3 have been determined for all the cards existing in the detection area, the processing operations described above are completed.
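The flow of steps S1 to S7 can be sketched as a poll-and-retry loop over the forecast detection area. The scanning callback, prompt, delay, and retry cap below are illustrative assumptions, not elements of the patent.

```python
# Hedged sketch of steps S1-S7: the detection area is derived from the
# already recognized card, and the program repeatedly scans that area for
# the remaining cards, prompting the user and waiting until all appear.
# scan_area, prompt, wait, and max_tries are illustrative stand-ins.

def recognize_in_area(scan_area, expected_cards, prompt, wait, max_tries=10):
    """scan_area() returns the set of identification numbers currently
    detected in the forecast detection area; retry until all expected
    cards are found, or give up after max_tries attempts."""
    for _ in range(max_tries):
        found = scan_area()
        if expected_cards <= found:          # all expected cards detected
            return found & expected_cards
        prompt("Please place all the cards.")  # step S6: error message
        wait(3)                                # step S7: wait ~3 seconds
    return None
```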

Also in this case, as shown in FIG. 12B, by starting the character appearance display program 84, for example, a scene is displayed in which the character images 70-1 to 70-3 associated with the identification numbers of the cards 54-1 to 54-3 appear on the identification images 56-1 to 56-3 of the cards 54-1 to 54-3, respectively.

In the first card position forecasting program 88, recognition of the multiple cards 54-2 and 54-3 is carried out only within the detection area 170, which is set up based on the position of the already detected identification image 56-1 of the one card 54-1. Therefore, it is not necessary to search all over the image memory 20 again in order to recognize the multiple cards 54-2 and 54-3, thereby reducing the load of the recognition process and enhancing the processing speed.

Next, the second card position forecasting program 90 will be explained. As shown in FIG. 3A, the program 90 is configured assuming a case where, for example, two persons each place three cards side by side, and the two persons initially place their first cards 54-1 and 54-4 on the left, that is, diagonally.

The second card position forecasting program 90 forecasts the positions of the other cards when two cards are placed diagonally. As shown in FIG. 16, the program 90 includes a rectangular area setup function 174, a reference cell finding function 176, an identification information detecting function 178, and a character image searching function 180.

Here, the processing operations of the second card position forecasting program 90 will be explained with reference to FIG. 18, in particular the operations after the camera coordinates of the identification images 56-1 and 56-4 of the two cards 54-1 and 54-4, placed diagonally on the diagonal line Lm, have been detected by the card recognition program 82, for example, as shown in FIG. 17A.

In step S101 of FIG. 18, as shown in FIG. 17A, the rectangular area setup function 174 obtains the camera coordinate of the rectangular area 182 including the diagonal line Lm, based on the camera coordinates of the identification images 56-1 and 56-4 of the two cards 54-1 and 54-4 placed on the diagonal line Lm.

Thereafter, in step S102, as shown in FIG. 17B, the drawing range 184 of the rectangular area 182 on the image memory 20 is obtained from the camera coordinate of the rectangular area 182 thus obtained. Then, in step S103, the reference cell finding function 176 determines whether or not image data of a reference cell 62 exists in the drawing range 184 of the rectangular area 182 in the image memory 20.
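Deriving the rectangular search area from the two diagonally placed cards can be sketched as a bounding box around their positions; the margin value is an illustrative assumption, not a figure from the patent.

```python
# Hedged sketch of the rectangular-area setup (step S101): the two diagonal
# cards define opposite ends of the diagonal line Lm, and the search
# rectangle is their axis-aligned bounding box widened by a margin large
# enough to cover the side-by-side cards. The margin is illustrative.

def rectangular_area(p1, p2, margin=1.0):
    """p1, p2: (x, y) coordinates of the two diagonal identification images.
    Returns (x_min, y_min, x_max, y_max) of the search rectangle."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2) - margin, min(y1, y2) - margin,
            max(x1, x2) + margin, max(y1, y2) + margin)
```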

If the image data of a reference cell 62 exists, assuming, as shown in FIG. 3A for example, the case where two cards 54-2 and 54-3 are arranged laterally beside the card 54-1 and two cards 54-5 and 54-6 are arranged laterally beside the card 54-4, the processing proceeds to step S104 in FIG. 18. Then, the identification information detecting function 178 subjects the image data of the region formed by the reference cell 62 and the corner cells 64 to an affine transformation.

Then, the identification numbers associated with the cards 54-2, 54-3, 54-5, and 54-6 are detected based on the identification images 56-2, 56-3, 56-5, and 56-6 of the remaining cards, respectively. Detection of the identification numbers is carried out by collating the codes extracted from the identification images 56-2, 56-3, 56-5, and 56-6 with the 2D code database 68.

Thereafter, in step S105, the character image searching function 180 searches the object information table 118 for character image data 120 based on the identification number and the like of each of the cards 54₂, 54₃, 54₅, and 54₆. For example, the records respectively associated with the identification numbers are searched out, and if the records thus found are "valid", the image data 120 is read out from the storage head address corresponding to the current level, out of the multiple storage head addresses registered in each record.
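The table lookup described above can be sketched roughly as follows. This is an illustration only, not the patented implementation; the table layout, field names, and address values are assumptions.

```python
# Hypothetical object information table: each record holds a validity flag,
# the character's current level, and one storage head address per level.
OBJECT_INFO_TABLE = {
    601: {"valid": True, "level": 1, "head_addresses": [0x1000, 0x2000, 0x3000]},
    602: {"valid": False, "level": 1, "head_addresses": [0x4000, 0x5000, 0x6000]},
}

def find_character_image_address(id_number):
    """Return the storage head address for the card's current level, or None
    when the record is missing or marked invalid."""
    record = OBJECT_INFO_TABLE.get(id_number)
    if record is None or not record["valid"]:
        return None
    return record["head_addresses"][record["level"] - 1]
```

Only valid records yield an address, matching the "valid"/"invalid" distinction the description draws for records in the table 118.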

On the other hand, if it is determined in step S103 that the identification images 56₂, 56₃, 56₅, and 56₆ of the remaining cards 54₂, 54₃, 54₅, and 54₆ do not exist, the reference cell finding function 176 proceeds to step S106 and outputs, on the screen of the monitor 30, an error message prompting the user to place all the cards 54₂, 54₃, 54₅, and 54₆. Then, after waiting for a predetermined period of time (for example, three seconds) in step S107, the processing returns to step S103 and the above processing is repeated.

Then, at the stage where all the character images 70₁ to 70₆ are determined for all the cards 54₁ to 54₆ existing in the detection area 182, the processing operations described above are completed.

Also in this case, by starting the character appearance display program 84, for example, a scene is displayed in which the character images 70₁ to 70₆ associated with the identification numbers and the like of the cards 54₁ to 54₆ appear on the identification images 56₁ to 56₆ of the cards 54₁ to 54₆, respectively.

In the second card position forecasting program 90, recognition of the multiple cards 54₂, 54₃, 54₅, and 54₆ is carried out within the detection area 182 formed by the two cards 54₁ and 54₄ placed on the diagonal line Lm; after the two cards 54₁ and 54₄ on the diagonal line Lm are recognized, the other cards 54₂, 54₃, 54₅, and 54₆ are recognized. Therefore, it is not necessary to scan the entire image memory 20 again in order to recognize the multiple cards 54₂, 54₃, 54₅, and 54₆, thereby reducing the load of the process of recognizing them.

In the first and the second card position forecasting programs 88 and 90 described above, it is possible to prepare multiple information tables (card placement information tables, not illustrated) indicating the placement of the cards 54₁ to 54₆, in association with the multiple versus-fighters, respectively. Then, every time the camera coordinates of the identification images 56₁ to 56₆ of the cards 54₁ to 54₆ and the identification numbers of the cards 54₁ to 54₆ are detected, those data may be registered in the associated card placement information table.

In this case, at the stage where the camera coordinates of all the cards 54₁ to 54₆ placed on the desk, table, or the like 52 are registered in the card placement information table, it is subsequently also possible to recognize (re-recognize) the cards 54₁ to 54₆ based on the camera coordinates registered in the card placement information table.

In the case where the versus-fighters each arrange three cards side by side, for example, it is also possible to specify one processing procedure uniquely for each of the versus-fighters by placing the three cards with the orientation of each card changed.

With respect to the three cards 54₁ to 54₃, for example, a character to perform an action is identified according to the orientation of the card 54₁ placed on the left, a counterpart character to be influenced by the action is identified according to the orientation of the card 54₂ placed at the center, and the action is identified according to the orientation of the card 54₃ placed on the right.

In other words, if a character that performs an action is specified by the orientation of the card 54₁ placed on the left: when the card 54₁ is oriented to the left, the character image 70₁ associated with the card 54₁ is selected; when the card 54₁ is oriented upward, the character image 70₂ associated with the card 54₂ at the center is selected; when the card 54₁ is oriented to the right, the character image 70₃ associated with the card 54₃ on the right is selected; and when the card 54₁ is oriented downward, the three character images 70₁ to 70₃ respectively associated with the three cards 54₁ to 54₃ are selected.

If a counterpart character is specified by the orientation of the card 54₂ placed at the center: when the card 54₂ is oriented to the left, the character image 70₄ associated with the counterpart card 54₄ is selected; when the card 54₂ is oriented upward, the character image 70₅ associated with the counterpart card 54₅ at the center is selected; when the card 54₂ is oriented to the right, the character image 70₆ associated with the counterpart card 54₆ on the right is selected; and when the card 54₂ is oriented downward, the image of the character selected by the orientation of the player's own card 54₁ placed on the left is selected.

If an action is specified by the orientation of the card 54₃ placed on the right: when the card 54₃ is oriented to the left, "to attack" is selected; when it is oriented upward, "to enchant" is selected; when it is oriented to the right, "to be defensive" is selected; and when it is oriented downward, "to protect" is selected.

If the processing is performed based on the orientations of the placed cards as described above, it is possible to issue commands for various kinds of processing according to a combination of the orientations of the multiple cards 54₁ to 54₃, without using an operation device (a device for inputting a command via key operations).
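The orientation-to-command scheme in the preceding paragraphs can be sketched as three lookup tables, one per card position. This is an illustrative reading of the description only; the card labels and return format are assumptions, not taken from the patent.

```python
# Hypothetical mapping of each card's orientation to its role, following the
# left (actor) / center (target) / right (action) scheme described above.
ACTOR = {"left": ["card1"], "up": ["card2"], "right": ["card3"],
         "down": ["card1", "card2", "card3"]}
TARGET = {"left": "card4", "up": "card5", "right": "card6", "down": "actor"}
ACTION = {"left": "attack", "up": "enchant", "right": "defend", "down": "protect"}

def command_from_orientations(left, center, right):
    """Derive an (actors, target, action) command from three card orientations."""
    return ACTOR[left], TARGET[center], ACTION[right]
```

For example, `command_from_orientations("up", "left", "up")` yields `(["card2"], "card4", "enchant")`, a command issued purely by how the cards are laid out, with no key input.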

Next, the image motion detecting program 92 will be explained. This program 92 obtains a difference between image data picked up at a predetermined timing and image data picked up at a timing different from the predetermined timing, and an image having moved is specified based on the data obtained as the difference. Then, it is determined whether or not the image thus specified (such as an image of the user's hand) comes into contact with a character image, and if there is contact between them, a parameter associated with the character is changed.

As shown in FIG. 19, this program 92 includes the first memory function 188, the second memory function 190, a difference calculating function 192, an image specifying function 194, a contact determining function 196, a repetitive function 198, and a parameter changing function 200.

Operations of the program 92 will be explained with reference to FIG. 19 to FIG. 21, the operations being subsequent, for example, to an action shown in FIG. 20A in which the user 74 holds one card 54₂ in one hand 202, and the character image 70₂ associated with the identification number of the card 54₂ appears on the identification image 56₂ of the card 54₂ on the screen of the monitor 30.

Firstly, the first memory function 188 captures pickup image data 204 from the CCD camera 42 based on the timing of input of the first differential signal S1 and stores the data in the first memory for difference 24. The timing for inputting the first differential signal S1 can be set arbitrarily.

The second memory function 190 captures pickup image data 206 from the CCD camera 42 based on the timing of input of the second differential signal S2 and stores the data in the second memory for difference 26. The timing for inputting the second differential signal S2 can also be set arbitrarily, and it may be one frame, two frames, five frames, or the like after the first differential signal S1.

As shown in FIG. 21, the difference calculating function 192 obtains the difference between the pickup image data 204 stored in the first memory for difference 24 and the pickup image data 206 stored in the second memory for difference 26. For example, it is possible to perform processing such as subtracting the pickup image data 204 stored in the first memory for difference 24 from the pickup image data 206 stored in the second memory for difference 26.

The image specifying function 194 specifies an image having moved based on the data 208 obtained as the difference. The data 208 obtained as the difference is assumed to be data in units of one pixel or several pixels, each dispersed in the shape of islands. Here, from the data 208 obtained as the difference, large clusters of data including at least 100 pixels may be extracted, and the data thus extracted is specified as images having moved.
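The differencing and cluster-extraction steps above can be sketched as follows. This is an illustration of the technique, not the patented code: the 100-pixel minimum follows the text, while the brightness threshold and the flood-fill clustering are assumptions about how the "islands" might be grouped.

```python
def moved_regions(frame_a, frame_b, threshold=30, min_pixels=100):
    """Return bounding boxes (top, left, bottom, right) of moved clusters
    found by differencing two grayscale frames (lists of lists of ints)."""
    h, w = len(frame_a), len(frame_a[0])
    # Threshold the absolute per-pixel difference between the two frames.
    diff = [[abs(frame_b[y][x] - frame_a[y][x]) > threshold for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if diff[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:  # flood-fill one island of changed pixels
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and diff[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_pixels:  # discard small islands
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```

Each returned box corresponds to the "recording range" of one extracted image having moved; small scattered islands fall below the pixel minimum and are ignored.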

FIG. 21 shows an example in which three images 208a to 208c indicate images that have moved. Here, the term "specifying" means that the recording range of the data 208a to 208c thus extracted is detected as screen coordinates.

The contact determining function 196 determines, as to the at least one image 208a to 208c having moved, whether or not there exists an image 208c touching the character image 70₂, based on the screen coordinates of the respective images 208a to 208c having moved and the character image 70₂ drawn in the image memory 20.

It is determined that there is an image touching the character image 70₂ if any part of the screen coordinates of the respective images 208a to 208c having moved agrees with a part of the screen coordinates of the character image 70₂ (the screen coordinates in the image memory 20). In the example of FIG. 21, the image 208c corresponds to the image touching the character image.

If there is only one image having moved (for example, the image 208c), the contact determining function 196 determines whether or not the image 208c having moved touches the character image 70₂. If a part of the screen coordinates of the image 208c having moved agrees with a part of the screen coordinates of the character image 70₂, it is determined that there is an image touching the character image 70₂; otherwise, it is determined that no image is touching it.
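The coordinate-agreement test above amounts to an overlap check between two screen regions. A minimal sketch, assuming both the moved image and the character image are represented by axis-aligned bounding boxes in screen coordinates (an assumption; the text only requires that some part of the coordinates agree):

```python
def rects_touch(a, b):
    """True if rectangles a and b, each (top, left, bottom, right) in
    screen coordinates, share at least one point."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
```

Applying this to each moved-image box against the character image's box reproduces the determination: any overlap means "touching", no overlap means no contact.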

The repetitive function 198 sequentially repeats the processing of the first memory function 188, the second memory function 190, the difference calculating function 192, the image specifying function 194, and the contact determining function 196.

While the repetitive function 198 repeats the processing of the above-described functions, if the count of determinations by the contact determining function 196 that the character image 70₂ is touched by an image having moved reaches a predetermined number of times or more (for example, any arbitrary integer of 5 or more can be selected), the parameter changing function 200 increases parameters such as experiential data, physical energy, and offensive power, which are registered in the record associated with the identification number of the card 54₂ in the object information table 118.
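The repeat-and-count behavior can be sketched as follows. This is a hedged illustration only; the function name, record fields, and increment amount are assumptions, and only the threshold-of-five example comes from the text.

```python
def breeding_loop(contact_events, record, required=5, bonus=1):
    """Increase the record's parameters once the number of observed
    contacts reaches the predetermined count `required`."""
    contacts = sum(1 for touched in contact_events if touched)
    if contacts >= required:
        for key in ("experience", "energy", "offense"):
            record[key] += bonus
    return record
```

Requiring several repeated contacts before changing parameters filters out a single accidental overlap, so only sustained patting of the character is rewarded.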

In the processing above, the image 208c determined by the contact determining function 196 as touching the character image 70₂, out of the images 208a to 208c specified by the image specifying function 194, is assumed to be a pseudo image of the user's hand 210, and it is also assumed that the user is patting and fondling the character.

Therefore, in addition to a versus-fighting game and the like, the processing is preferably applied to a breeding game, in which a user breeds a specific character so as to enhance its offensive power, defensive power, and the like.

It is to be noted here that when the user pats and fondles the character by hand, there may be a case where the image of the user's hand 210 covers the character image 70₂, and the character image 70₂ is hardly visible even if an action in which the character is delighted is drawn in the image memory 20.

Considering the situation above, when the pickup image is drawn in the image memory 20, the Z value for Z buffering may be set, for example, to the highest value (a value indicating that it is positioned furthest from the camera viewpoint, which is the origin of the camera coordinate system). Accordingly, even when an image of the user patting the character by hand is displayed, the character image 70₂ is not covered by the image of the user's hand 210. Therefore, the user can fondle the character while seeing that the character is delighted, thereby adding amusement to the breeding game and the like.
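The Z-buffering idea above can be sketched with a toy depth test. Names and the dictionary-based buffers are hypothetical; the point is only that drawing the pickup image at the maximum depth lets any nearer draw, such as the character image, win the depth test and stay visible over the hand.

```python
FAR = float("inf")  # depth meaning "furthest from the camera viewpoint"

def draw(zbuffer, framebuffer, pos, depth, pixel):
    """Write `pixel` only if it is at least as near as the stored depth."""
    if depth <= zbuffer[pos]:
        zbuffer[pos] = depth
        framebuffer[pos] = pixel

zbuf, fb = {(0, 0): FAR}, {(0, 0): "background"}
draw(zbuf, fb, (0, 0), FAR, "hand")       # pickup image drawn at maximum depth
draw(zbuf, fb, (0, 0), 1.0, "character")  # character image drawn nearer
draw(zbuf, fb, (0, 0), FAR, "hand")       # hand can no longer cover the character
```

After the three draws, the pixel holds the character image, even though the hand was drawn both before and after it.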

In the meantime, according to the progress of the video game, a card put on the desk, table, or the like 52 may be moved by the user's hand. For example, this happens when at least one of the cards is displaced, when cards are switched in position, when a card is replaced by a new card, or the like. If there is such motion of a card, it is necessary to recognize the moved card again.

In order to solve the problem above, the card recognition program 82, the first card position forecasting program 88, and the second card position forecasting program 90 may be started every several frames, or every several dozen frames. It is a matter of course that when a new card (for example, the card 54₁) is recognized, the character appearance display program 84 is started and a character image 70₁ associated with the identification number and the like of the new card 54₁ may appear on the identification image 56₁ of the new card 54₁. Furthermore, in a simple case such as a card being displaced from its position or card positions being switched, the character action display program 86 is started, and an action of "waiting" is displayed, for example.

In the re-recognition of a card, as described above, the card recognition program 82, the first card position forecasting program 88, and the second card position forecasting program 90 may be started every several frames or every several dozen frames. Alternatively, the re-recognition of a card may be performed only when a card is moved.

Hereinafter, the method for the above processing, that is, the processing of the card motion detecting program 94 and the card re-recognition program 96, will be explained.

Firstly, the card motion detecting program 94 obtains a difference between image data picked up at a predetermined timing and image data picked up at a timing different from the predetermined timing, and a card having moved is specified based on the data obtained as the difference.

In other words, the three cards 54₁ to 54₃ placed on the desk, table, or the like 52, for example, are subsequently subjected to the following: as shown in FIG. 22A, one card 54₁, for instance, is moved sideways by the user's hand; as shown in FIG. 22B, one card 54₁ is displaced in one direction; as shown in FIG. 22C, the positions of multiple cards 54₂ and 54₃ placed together are switched (relocation); as shown in FIG. 23A, for example, one card 54₁ is replaced with a new card 54₄; and as shown in FIG. 23B, the three cards 54₁ to 54₃ placed together are superimposed on one another. The program 94 detects such motions as described above and specifies a card having moved.

As shown in FIG. 24, the program 94 includes, similarly to the image motion detecting program 92 described above, the first memory function 212, the second memory function 214, and a difference calculating function 216. In addition, the program 94 further includes a card specifying function 218 and a repetitive function 220.

The first and the second memory functions 212 and 214 and the difference calculating function 216 perform the same processing as the first and the second memory functions 188 and 190 and the difference calculating function 192 of the image motion detecting program 92 described above; therefore, redundant explanation is omitted. The repetitive function 220 sequentially repeats the processing of the first memory function 212, the second memory function 214, the difference calculating function 216, and the card specifying function 218.

The card specifying function 218 detects, from the data obtained as the difference, the range where a card (for example, the card 54₁) is placed, and finds out whether or not the identification image 56₁ of the card 54₁ exists. The identification image 56₁ of the card 54₁ in this situation includes an identification image (positive image) of the card 54₁ that has newly appeared by moving, and an identification image (negative image) of the card 54₁ that has disappeared by moving. If the identification image 56₁ of the card 54₁ exists, it is determined that there has been motion of the card 54₁, and the recording range of the identification image 56₁ of the card 54₁ is detected as screen coordinates.

Specifically, when one card 54₁ is displaced or moved sideways, the screen coordinates of the identification image 56₁ of that card 54₁ are detected. When two or more cards (for example, the cards 54₁ to 54₃) are displaced or moved sideways, the screen coordinates of the respective identification images 56₁ to 56₃ of the at least two cards 54₁ to 54₃ that have been displaced or moved sideways are detected.

As shown in FIG. 22C, for example, when the card 54₂ and the card 54₃ are switched, the screen coordinates of the respective identification images 56₂ and 56₃ of the switched cards 54₂ and 54₃ are detected. As shown in FIG. 23A, for example, when the card 54₃ is replaced with another card 54₄, the screen coordinates of the respective identification images 56₃ and 56₄ of the replaced cards 54₃ and 54₄ are detected. As shown in FIG. 23B, for example, when the three cards 54₁ to 54₃ are superimposed on one another, the screen coordinates of the identification image 56₁ of the uppermost card (for example, the card 54₁) are detected, out of the three cards 54₁ to 54₃ thus superimposed.

As thus described, at the stage where the screen coordinates of the identification image of a card are detected, the card having moved is specified. The screen coordinates thus detected are supplied to the card re-recognition program 96, which is started subsequently.

Next, the card re-recognition program 96 will be explained. This program 96 is started when the card specifying function 218 of the card motion detecting program 94 specifies a card having moved, and the identification information of the card having moved is recognized again.

As shown in FIG. 25, the card re-recognition program 96 includes, similarly to the card recognition program 82 described above, a camera coordinate detecting function 222, an identification information detecting function 224, and a character image searching function 226. Since the processing in the card re-recognition program 96 is almost the same as that of the aforementioned card recognition program 82, redundant explanation is omitted here.

According to the processing of the card re-recognition program 96, a character image associated with the identification number and the like of the card having moved is specified. Therefore, when a new card is recognized, the character appearance display program 84 is started, and a character image associated with the identification number and the like of the new card appears on the identification image of the new card.

In a simple case such as a card being displaced from its position or card positions being switched, the character action display program 86 is started; the character image associated with the identification number and the like of the card is displayed, and simultaneously an action such as "to attack", "to enchant", "to be defensive", or "to protect" is performed according to how the card was moved.

When multiple cards are superimposed on one another, the character image associated with the identification number and the like of the card placed on top may appear. Alternatively, by storing the identification numbers of the at least two cards thus superimposed, a new character associated with the combination of the at least two identification numbers may appear. For example, the at least two characters associated with the superimposed cards may merge into a new character, such as an enormous character. This can be achieved by performing a process in which a record associated with the combination of the at least two identification information items is retrieved from a merging information table (not illustrated), and a character image associated with the merged item is read out from the head address of the image data registered in the record thus retrieved.
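The merging-information-table lookup above can be sketched with an order-independent key over the combined identification numbers. The table contents and names here are illustrative assumptions, not data from the patent.

```python
# Hypothetical merging information table: a combination of identification
# numbers keys the merged character, regardless of stacking order.
MERGE_TABLE = {
    frozenset({601, 602}): "giant_character",
    frozenset({601, 603}): "dragon_character",
}

def merged_character(id_numbers):
    """Return the merged character registered for the superimposed cards,
    or None when no record exists for that combination."""
    return MERGE_TABLE.get(frozenset(id_numbers))
```

Using a `frozenset` key means the same record is retrieved whichever card ends up on top of the stack.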

As shown in FIG. 26, when two transparent cards 54₇ and 54₈ are superimposed on one another, a new identification image 56₉ is formed, which is a composite of the identification image 56₇ of the transparent card 54₇ and the identification image 56₈ of the transparent card 54₈. Therefore, the new identification image 56₉ is recognized by the card recognition program 82, and subsequently the character appearance display program 84 is started, for example. Then, on the screen of the monitor 30, a scene is displayed in which a new character image associated with the identification number and the like specified by the identification image 56₉ appears on this new identification image 56₉.

Next, the character changing program 98 will be described. This program 98 is configured such that, at the time of a level-up, a character image is changed into an image in accordance with the level. As shown in FIG. 27, it includes a level-up function 228.

This program 98 is started every time writing into the object information table 118, in particular writing of the experiential data, is performed by the various application programs 80 described above. The level-up function 228 determines, for the record into which the experiential data is written, whether or not the experiential data stored in the record exceeds a certain value. If it does, the level stored in the record is incremented by one.
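The level-up check can be sketched as follows; the field names and the threshold schedule are assumptions for illustration, not values from the patent.

```python
# Hypothetical experience thresholds: the amount needed to leave each level.
LEVEL_THRESHOLDS = {1: 100, 2: 300}

def maybe_level_up(record):
    """Increment the record's level once its experiential data exceeds the
    threshold for the current level; top-level records are left unchanged."""
    threshold = LEVEL_THRESHOLDS.get(record["level"])
    if threshold is not None and record["experience"] > threshold:
        record["level"] += 1
    return record
```

Running this on every write of experiential data reproduces the program's behavior: the level rises by one as soon as the stored experience passes the certain value for that level.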

As described above, the card recognition program 82 and the card re-recognition program 96 retrieve image data associated with the identification number and the level of a card, and the character appearance display program 84 and the character action display program 86 retrieve an action data string associated with the identification number and the level of the card. Accordingly, a character image associated with the identification number and the level of the card is displayed superimposed on the identification image of the card.

Therefore, as shown in FIG. 28, for example, when there are three identification numbers (the first to third identification numbers 60₁ to 60₃), at level 1 the same character image (the first character image) 71₁ is associated with each of the three identification numbers. However, it is possible that at level 2 the first identification number 60₁ is associated with the second character image 71₂, and the second and third identification numbers 60₂ and 60₃ are each associated with the third character image 71₃. It is further possible that at level 3 the identification numbers are respectively associated with the images 71₄ to 71₆ of different characters (the fourth to sixth characters).

Of course, it is possible that the same character image is associated with one identification number from level 1 to level 2, for example. Alternatively, at the stage where level 1 becomes level 2, the image may be changed to another character image, while the same character image is kept from level 2 to level 3. In addition, the character image may remain the same from level 1 to level 3 for one identification number.

Accordingly, even if a card is of the same kind from which the same character appears, it is possible to give the user the impression that the character is undergoing various evolutions according to its level, thereby enhancing the interest of the card game.

Next, the information table reference program 100 will be explained. As shown in FIG. 29, this program 100 accesses the memory card 38 or a network server 230 when the memory card 38 or the network server 230 manages the object information table 118 and the like, and it includes a table access function 232. The network server 230 is connected to the video game system 10 relating to the embodiment of the present invention via a network 234.

For example, when the object information table 118 is stored on the hard disk 44, the character, parameters, and the like associated with a card differ from user to user. Therefore, transferring a card may be almost meaningless if the contents of the object information table 118 differ.

Considering the situation above, the information table reference program 100 enables the transfer of cards. If the accessing destination is the memory card 38, the table access function 232 registers user information for fighting. Then, the use of the object information table 118 registered in the memory card 38 is limited to the users registered in the memory card 38. Accordingly, transferring cards is available among the users registered in the memory card 38.

If the accessing destination is the network server 230, the table access function 232 registers the user information for fighting in the network server 230 via a browser. Then, use of the object information table 118 managed by the network server 230 is permitted to those users.

In such a case, the object information table 118 managed by the network server 230 becomes information specific to the users registered in the network server 230, and this information may be common to all the registered users. Therefore, card transferring among the users registered in the network server 230 is available.

If the network server 230 manages the object information table 118, it is possible to perform processing such that, as to a record having been set as "invalid", a character image is newly registered or a parameter is set up after a lapse of a predetermined period of time, by a version upgrade of the video game, or the like, so that the record is reconfigured as "valid".

In such a case, the addition of a new character in accordance with the production of a new card, the revival of a deceased character, and the like are carried out smoothly with an advertisement on a home page, achieving effective use of the cards and the like.

Furthermore, as shown in FIG. 2B, the 2D pattern in the code part 60 of the identification image 56 of a card is configured such that multiple identification cells 66 are arranged. Therefore, the 2D pattern of the code part 60 may be changed freely, by adding a coated part on an identification cell 66 with a black oil-based or water-based pen, or by erasing an identification cell 66 with a white oil-based or water-based pen. This provides enjoyment such as anticipating what type of object image will appear and what form the image will change into upon level-up, thereby giving further amusement to the video game into which the card game is merged.

It should be understood that the image display system, the image processing system, and the video game system relating to the present invention are not limited to the disclosed embodiments, but are susceptible of changes and modifications without departing from the scope of the invention.

Claims

  1. An image display system comprising: a computer; an image pickup means which is connected to said computer and picks up an actual image of a card to which an identification image is attached; and a display device which is connected to said computer; said system further comprising: a pickup image display means which outputs to said display device image data of a pickup image, including said identification image of said card, from said image pickup means, and allows said display device to display said pickup image; a finding means which finds image data of said identification image attached to said card out of said image data of said pickup image; an identification information detecting means which detects identification information of said card from said image data of said identification image found out by said finding means; an object display control means which controls so that a virtual object image associated with said identification information of said card is displayed on said identification image of said card displayed on said display device, in such a manner as being superimposed on said identification image; a difference calculating means which obtains a difference user image by calculating a difference between first user image data picked up at a predetermined timing and second user image data picked up at a timing different from said predetermined timing; a contact determining means which determines whether or not said difference user image comes into contact with said virtual object image, based on a screen coordinate of said difference user image and a screen coordinate of said object image displayed on said display device in such a manner as being superimposed on said identification image of said card; and a parameter changing means which changes a parameter associated with said identification information when it is determined that said difference user image comes into contact with said virtual object image.
  2. The image display system according to claim 1, wherein said contact determining means determines that said difference user image comes into contact with said object image when a screen coordinate of said difference user image and a screen coordinate of said object image displayed on said display device, in such a manner as being superimposed on said identification image of said card, are identical.
  3. The image display system according to claim 1 or 2, wherein, while said system repeatedly conducts, in order, a series of procedures consisting of the processing of said difference calculating means and said contact determining means, said parameter changing means changes a parameter associated with said identification information when the number of times that said difference user image is determined to come into contact with said object image exceeds a predetermined number.
  4. The image display system according to claim 1, wherein an image specifying means specifies the image data of said actual image having moved as image data of a user's hand image.
  5. The image display system according to claim 1, wherein multiple identification information items are allocated to an identical virtual object, and parameters regarding said identical virtual object are different for each of said multiple identification information items.
  6. The image display system according to claim 1, wherein a storage medium which is connected to said computer and is freely removable is provided, and data associated with said identification information is managed by said storage medium.
  7. The image display system according to claim 1, wherein said computer is connected to a network server via a network, and data associated with said identification information is managed by said network server.
  8. The image display system according to claim 1, wherein a Z value of Z buffering is set to a maximum value indicating a position furthest from said image pickup means in an image pickup means coordinate system.
