U.S. Pat. No. 10,525,352

GAME PROCESSING METHOD AND RECORDING MEDIUM

Assignee: KOEI TECMO GAMES CO., LTD.

Issue Date: December 25, 2017

Illustrative Figure

Abstract

A game processing method executed by an information processing device includes switching a position of a virtual camera for generating an image of a first-person viewpoint in a virtual three-dimensional space among a plurality of positions set in advance, based on an operation input of a user. A direction of the virtual camera is controlled based on the operation input of the user. The direction of the virtual camera is corrected based on a positional relationship between a predetermined object present in the virtual three-dimensional space and the virtual camera when the position of the virtual camera is switched.

Description

DETAILED DESCRIPTION OF THE EMBODIMENTS

An embodiment of the present invention will be described below by referring to the drawings.

1. Entire Configuration of Game System

First, an example of the entire configuration of a game system 1 related to this embodiment will be described by using FIG. 1. As shown in FIG. 1, the game system 1 has an information processing device 3, a controller 5, and a head-mount display 7. Each of the controller 5 and the head-mount display 7 is connected to the information processing device 3 so as to be capable of communication (capable of transmission/reception of a signal). Note that, although a case of wired connection is shown in FIG. 1, connection may be made wirelessly.

The information processing device 3 is an installed-type (stationary) game machine, for example. However, that is not limiting, and it may be a portable game machine integrally including an input part or the like, for example. Moreover, other than a game machine, it may be a device manufactured and sold as a computer, such as a server computer, a desktop computer, a laptop computer, or a tablet computer, for example, or a device manufactured and sold as a telephone, such as a mobile phone, a smartphone, or a phablet.

A user performs various operation inputs by using the controller 5 (an example of an input device). In the example shown in FIG. 1, the controller 5 has a cross key 9, a plurality of buttons 11a-11d, and the like, for example. Hereinafter, the button 11a is described as an "A button 11a", the button 11b as a "B button 11b", the button 11c as a "C button 11c", and the button 11d as a "D button 11d" as appropriate, and in the case that the buttons are not discriminated, they are simply described as the "button 11". Note that, instead of or in addition to the above, the controller 5 may have a joystick, a touch pad, and the like, for example.

The head-mount display 7 is a display device capable of being attached to the head part or face part of the user. The head-mount display 7 displays an image (including a still image and a moving image) related to a game generated by the information processing device 3. Note that the head-mount display 7 may be either of a transparent type or a nontransparent type.

2. Outline Configuration of Head-Mount Display

Subsequently, an example of the outline configuration of the head-mount display 7 will be described by using FIG. 2. As shown in FIG. 2, the head-mount display 7 has a display unit 13, a direction detector 15, and a position detector 17.

The display unit 13 is constituted by a liquid crystal display, an organic EL display, or the like, for example, and displays an image related to the game. The direction detector 15 is constituted by an acceleration sensor, a gyro sensor, and the like, for example, and detects the direction of the head part (direction of the face) of the user. The position detector 17 is constituted by a camera installed outside of the head-mount display 7 and markers such as a light-emitting part installed on the head-mount display 7, for example, and detects the position of the head part of the user. Note that each of the detectors may have a configuration other than the above.

The information processing device 3 changes the image to be displayed on the display unit 13 in accordance with the direction or the position of the head part of the user, on the basis of the detection results of the direction detector 15 and the position detector 17, and thereby expresses a realistic virtual reality (hereinafter also referred to as "VR" as appropriate). Note that the device may also be made to detect the position of the controller 5 so that the image to be displayed on the display unit 13 is changed in accordance with both movement of the head part of the user and an operation of the controller 5.

Note that the configuration form of the head-mount display 7 is not limited to the above. Although explanation was omitted above, the head-mount display 7 may have an earphone or headphones mounted, for example.

3. Outline Contents of Game

Subsequently, an example of the outline contents of the game related to this embodiment, that is, a game provided by the information processing device 3 executing a game program and a game processing method of the present invention, will be described by using FIGS. 3 and 4.

The game related to this embodiment generates a first-person viewpoint image in a virtual three-dimensional space in accordance with movement of the head part of the user and an operation input of the controller 5, so that the user can observe an object arranged in the virtual space from a desired direction and distance, for example. The type of the object is not particularly limited and includes, for example, a male human character, a female human character, an animal character, a virtual living-being character other than humans or animals, or an object other than living beings. In this embodiment, as shown in FIG. 3, a case in which the object is a female character 19 (an example of a first object) is explained as an example. Note that, in the three-dimensional virtual space, an XYZ coordinate system with a horizontal plane as the XZ plane and the vertical axis as the Y-axis is set.

FIG. 4 shows an example of a flow of the game executed by the information processing device 3.

At Step S5, the information processing device 3 has a title screen of the game displayed on the display unit 13 of the head-mount display 7.

At Step S10, the information processing device 3 determines whether or not there is an instruction input to start the game by the user. The instruction input to start the game is selection of a menu in a VR mode on the title screen or the like, for example. In the case that there is no instruction input to start the game (Step S10: NO), the routine returns to Step S5 and continues display of the title screen. On the other hand, in the case that there is the instruction input to start the game (Step S10: YES), the routine proceeds to the subsequent Step S15.

At Step S15, the information processing device 3 has an attachment guide displayed on the display unit 13 of the head-mount display 7 in order to explain to the user how to attach the head-mount display 7 and the like. In the case that the head-mount display 7 is not connected to the information processing device 3, for example, a display prompting connection is made.

At Step S20, the information processing device 3 determines whether or not this is the first time the game is played in VR, by referring to a history of save data or the like, for example. In the case that this is the first time (Step S20: YES), the routine proceeds to the subsequent Step S25. On the other hand, in the case that it is not the first time (Step S20: NO), the routine proceeds to Step S30, which will be described later.

At Step S25, the information processing device 3 has a tutorial explaining how to play the game in VR displayed on the display unit 13 of the head-mount display 7.

At Step S30, the information processing device 3 has a selection screen of a viewing mode displayed on the display unit 13 of the head-mount display 7, and determines whether or not a viewing mode is selected by the user. The viewing mode is for the user to set what behavior of the female character 19 is to be viewed, and in this embodiment, for example, an "event mode", a "gravure mode", a "photo mode", and the like are prepared. The "event mode" is a mode in which events such as the opening, activities, changing clothes, and the like, once viewed in the game, can be freely viewed again. The "gravure mode" is a mode in which a gravure, once viewed in the game, can be freely viewed again. The "photo mode" is a mode in which the female character 19 is made to take a desired pose and can be freely photographed.

In the case that the selection of the viewing mode is cancelled (Step S30: NO), the routine proceeds to the subsequent Step S35, and the information processing device 3 has an end guide, such as an instruction to remove the head-mount display 7, displayed on the display unit 13 of the head-mount display 7. After that, the routine returns to the previous Step S5, and the title screen is displayed. On the other hand, in the case that the viewing mode is selected (Step S30: YES), the routine proceeds to the subsequent Step S40.

At Step S40, the information processing device 3 has a selection screen of a character displayed on the display unit 13 of the head-mount display 7, and determines whether or not the character to be viewed is selected by the user. In the case that the selection of the character is cancelled (Step S40: NO), the routine returns to Step S30, and the selection screen of the viewing mode is displayed. On the other hand, in the case that the character is selected (Step S40: YES), the routine proceeds to the subsequent Step S45.

At Step S45, the information processing device 3 has various setting screens displayed on the display unit 13 of the head-mount display 7, and determines whether or not various settings are made by the user. The various settings include, for example, the scene to be viewed (place, time (morning, daytime, evening, night), and the like), poses, clothes (swimming wear, costumes, and the like), and skin state (tanning, degree of wetness, and the like), but may include others. In the case that the setting is cancelled (Step S45: NO), the routine returns to Step S40, and the selection screen of the character is displayed. On the other hand, in the case that the various settings are made (Step S45: YES), the routine proceeds to the subsequent Step S50.

At Step S50, the information processing device 3 reproduces and displays an image (including a still image and a moving image) of the selected female character 19 in the selected viewing mode on the display unit 13 of the head-mount display 7, in accordance with the various conditions set as above. In the case that the reproduction is finished, the routine returns to Step S45, and the various setting screens are displayed.

Note that the processing at all the Steps described above may be executed in the "VR mode", in which the image to be displayed on the display unit 13 is changed in accordance with movement of the head part of the user. Alternatively, it may be executed in a "normal mode", in which only some Steps, such as Step S25 and Step S50, are executed in the VR mode, and for the other Steps movement of the head part of the user is not reflected in the image on the display unit 13, for example.
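The selection flow of Steps S30-S50 above can be sketched as a small loop in which cancelling a screen returns to the previous one. This is a hypothetical Python sketch, not the patent's implementation; the `ui` object and its `select_*`/`apply_settings` methods are invented names that return `None` on cancel.

```python
def run_viewing_flow(ui):
    """Sketch of the Step S30-S50 selection loop."""
    while True:
        mode = ui.select_viewing_mode()        # Step S30
        if mode is None:
            return None                        # Step S35: end guide, back to title
        while True:
            character = ui.select_character()  # Step S40
            if character is None:
                break                          # back to the viewing-mode screen
            settings = ui.apply_settings()     # Step S45
            if settings is None:
                continue                       # back to the character screen
            return (mode, character, settings) # Step S50: reproduce and display
```

Cancelling at each stage thus walks the routine back exactly one screen, matching the NO branches of the flowchart in FIG. 4.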

The contents of the characteristic portions of the game related to this embodiment will be described below in detail. Note that the processing contents described below are executed during reproduction in the viewing mode at Step S50, and are applied in any of the viewing modes, that is, the "event mode", the "gravure mode", and the "photo mode".

4. Functional Configuration of Information Processing Device

Subsequently, by using the aforementioned FIG. 2 and FIGS. 5-19, an example of the functional configuration of the information processing device 3, mainly the configuration related to reproduction in the viewing mode, will be described.

As shown in FIG. 2, the information processing device 3 has a camera-position control processing part 21, a camera-direction control processing part 23, a camera-position switching processing part 25, a camera-direction correction processing part 27, a first touch determination processing part 29, a first object deformation processing part 31, an object-position control processing part 33, a second touch determination processing part 35, a vibration generation processing part 37, an airflow generation processing part 39, a second object deformation processing part 41, a sight line direction detection processing part 43, and an object behavior control processing part 45.

The camera-position control processing part 21 controls the position of a virtual camera for generating an image of a first-person viewpoint in the virtual three-dimensional space on the basis of the operation input of the user. In this embodiment, the position of the virtual camera is controlled on the basis of the position of the head part of the user detected by the position detector 17 of the head-mount display 7. As a result, the user can move the position of the virtual camera (the position of the viewpoint) in the virtual space to a desired position in accordance with the movement of the head part.

The camera-direction control processing part 23 controls the direction of the virtual camera on the basis of the operation input of the user. In this embodiment, the direction of the virtual camera is controlled on the basis of the direction of the head part of the user detected by the direction detector 15 of the head-mount display 7. As a result, the user can change the direction of the virtual camera (the sight line direction) in the virtual space to a desired direction in accordance with the movement direction of the head part.

The camera-position switching processing part 25 switches the position of the virtual camera among a plurality of positions set in advance on the basis of the operation input of the user. In this embodiment, a plurality of positions at which the virtual camera can be arranged so as to surround the periphery of the female character 19 is set in advance, and the user can switch the position of the virtual camera to a desired position by an operation input through the controller 5. Note that the camera-position control processing part 21 controls the position of the virtual camera 47 on the basis of the position of the head part detected by the position detector 17, within a predetermined range (within a range with a radius of several meters, for example) around the position switched to by the camera-position switching processing part 25.
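The constraint above, keeping head-tracked movement within a predetermined radius of the switched position, could be implemented as a simple clamp. A minimal sketch; the function and parameter names are hypothetical, not taken from the patent:

```python
import math

def clamp_camera_position(head_pos, base_pos, radius):
    """Limit the head-tracked camera position to within `radius` of the
    base position chosen by the camera-position switching processing."""
    offset = [h - b for h, b in zip(head_pos, base_pos)]
    dist = math.sqrt(sum(c * c for c in offset))
    if dist <= radius:
        return tuple(head_pos)
    # Project the position back onto the sphere bounding the allowed range.
    scale = radius / dist
    return tuple(b + c * scale for b, c in zip(base_pos, offset))
```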

FIG. 5 shows an example of the switching positions of the virtual camera 47. In the example shown in FIG. 5, four spots surrounding the periphery of the female character 19 are set as switching positions, and each of the switching positions is assigned to one of the buttons 11a-11d of the controller 5. For example, in the case that the A button 11a is pressed, the position of the virtual camera 47 is switched to a front position of the female character 19. In the case that the B button 11b is pressed, the position of the virtual camera 47 is switched to a right position of the female character 19. In the case that the C button 11c is pressed, the position of the virtual camera 47 is switched to a rear position of the female character 19. In the case that the D button 11d is pressed, the position of the virtual camera 47 is switched to a left position of the female character 19.
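The button-to-position assignment can be sketched as a lookup table. The coordinates below are illustrative only (a character standing at the origin; the front position farthest, the right closest, the rear in between, as described for FIGS. 6-8), not values from the patent:

```python
# Hypothetical preset switching positions, keyed by controller button.
SWITCH_POSITIONS = {
    "A": (0.0, 1.2, 3.0),    # front of the character (longest separation)
    "B": (1.0, 1.2, 0.0),    # right of the character (shortest separation)
    "C": (0.0, 1.2, -2.0),   # rear of the character (middle separation)
    "D": (-1.5, 1.2, 0.0),   # left of the character
}

def switch_camera_position(button):
    """Return the preset camera position assigned to the pressed button."""
    return SWITCH_POSITIONS[button]
```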

FIG. 6 shows an example of the display screen in the case that the A button 11a is pressed. The front position of the virtual camera 47 is set with a separation distance from the female character 19 longer than the other positions, and as shown in FIG. 6, in the case of being switched to the front position, the entire body of the female character 19 is displayed. In this example, the substantially center position of the body of the female character 19 is displayed substantially at the center position of the screen. Note that the other switching positions of the virtual camera 47 are shown in FIG. 6 for explanation (actually, they are not displayed).

FIG. 7 shows an example of the display screen in the case that the B button 11b is pressed. The right position of the virtual camera 47 is set with a separation distance from the female character 19 shorter than the other positions, and as shown in FIG. 7, in the case of being switched to the right position, substantially the upper body of the female character 19 is displayed. In this example, the substantially center position of the upper body of the female character 19 is displayed substantially at the center position of the screen.

FIG. 8 shows an example of the display screen in the case that the C button 11c is pressed. The rear position of the virtual camera 47 is set with a middle separation distance from the female character 19, and as shown in FIG. 8, in the case of being switched to the rear position, the portion of the body of the female character 19 substantially above the knees is displayed. In this example, the substantially center position of that portion is displayed substantially at the center position of the screen.

Note that the display screen in the case that the D button 11d is pressed is not shown.

Note that each of the display screens shows the case in which there is no change in the position and the direction of the virtual camera 47 by movement of the head part of the user, that is, the display in the case that the virtual camera 47 is located at the initially set position (the position immediately after switching) and is directed in the initially set direction. Therefore, the user can adjust the position and the direction of the virtual camera (viewpoint) to a desired position and direction from the state of each of the display screens by moving the head part. As a result, after coarsely adjusting the direction of watching the female character 19 by switching the position of the virtual camera 47, the user can make fine adjustments by moving the head part and the body.

Further, the above switching positions are an example, and the number, positions, and directions of the switching positions, the distance from the female character 19, and the like may be set to values other than the above. Furthermore, the number of characters is not limited to one, and the switching positions may be set so as to surround a plurality of characters.

Returning to FIG. 2, the camera-direction correction processing part 27 corrects the direction of the virtual camera 47 controlled by the camera-direction control processing part 23 on the basis of the positional relationship between the female character 19 and the virtual camera 47 in the virtual three-dimensional space when the position of the virtual camera 47 is switched by the camera-position switching processing part 25. Specifically, the camera-direction correction processing part 27 corrects the direction of the virtual camera 47 so that the direction of the virtual camera 47, when seen from the vertical direction (Y-axis direction), becomes a direction with the female character 19 on the front. That is, the camera-direction correction processing part 27 corrects only the rotating angle around the vertical axis (Y-axis) without considering the elevation/depression angle (i.e., the pitch angle: the elevation angle and the depression angle to the XZ plane) in the virtual space.
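Seen from above, this correction reduces to replacing the camera's yaw with the yaw that faces the character, while leaving the pitch untouched. A hedged Python sketch; the coordinate convention and all names are assumptions for illustration, not taken from the patent:

```python
import math

def yaw_facing(camera_pos, character_pos):
    """Rotation about the vertical Y-axis that puts the character on the
    front of the camera, ignoring any height difference (XZ plane only)."""
    dx = character_pos[0] - camera_pos[0]
    dz = character_pos[2] - camera_pos[2]
    return math.atan2(dx, dz)

def correct_direction(yaw, pitch, camera_pos, character_pos):
    # Only the yaw is corrected; the elevation/depression angle (pitch)
    # to the XZ plane is preserved, as described above.
    return yaw_facing(camera_pos, character_pos), pitch
```

Preserving the pitch is what keeps the user's sense of "up" consistent after a switch, which the following paragraphs tie to avoiding VR sickness.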

FIGS. 9-12 show a specific example of correction by the camera-direction correction processing part 27. As shown in FIG. 9, for example, suppose the virtual camera 47 is located on the right of the female character 19 and the user has turned the direction of the virtual camera 47 closer to the front of the female character 19. In the case that the virtual camera 47 is then switched to the rear position, as shown in FIG. 10, correction is made only by an angle θ1 in the rotating direction around the Y-axis so that the direction of the virtual camera 47 substantially matches the direction with the female character 19 on the front, that is, in this example, so that a center line CL of the body of the female character 19 substantially matches a center line CLo of the display screen. As a result, in the case that the virtual camera 47 is switched to the rear position, the user enters a state of seeing the rear of the female character 19 from right behind.

Moreover, as shown in FIG. 11, for example, suppose the virtual camera 47 is located on the right of the female character 19 and the user has turned the direction of the virtual camera 47 upward and closer to the front of the female character 19. In the case that the virtual camera 47 is then switched to the rear position, as shown in FIG. 12, the direction of the virtual camera 47 is not changed with respect to the elevation angle to the horizontal plane but is corrected by an angle θ2 only in the rotating direction around the Y-axis so that the center line CL of the body of the female character 19 substantially matches the center line CLo of the display screen. As a result, in the case that the virtual camera 47 is switched to the rear position, the user enters a state of looking up above the rear of the female character 19 from right behind.

As described above, by correcting only the rotating angle around the vertical axis without considering the elevation/depression angle in the virtual space, an unnatural display, in which the female character 19 is brought to the front by switching of the camera position even though the virtual camera 47 is directed upward or downward with respect to the female character 19, can be prevented, and lowering of the reality of the virtual space can be prevented. Particularly, in the case of a VR-mode compliant game as in this embodiment, so-called VR sickness of the user, caused by a wrong positional relationship between the front direction of the user and an upper part (the sky or the like, for example) or a lower part (the ground or the like, for example) of the virtual space, can be suppressed.

Note that, after correction is made by the camera-direction correction processing part 27 as above, the direction of the virtual camera 47 is controlled by the aforementioned camera-direction control processing part 23 on the basis of the direction after the correction. In the example shown in FIG. 9, for example, the user has rotated the head part to the left around the vertical axis before switching of the camera position; since correction is made so that the female character 19 is located on the front by switching of the camera position, in the case that the user rotates the head part to the right to return it to the original direction after the switching, for example, the female character 19 moves to the left side of the center line CLo of the display screen.

Note that it may be so configured that the user can initialize the correction by the camera-direction correction processing part 27 by carrying out a predetermined operation input by using the controller 5 or the head-mount display 7.

Returning to FIG. 2, the first touch determination processing part 29 determines the presence or absence of a touch between the virtual camera 47 and the female character 19 present in the virtual three-dimensional space. Note that the "touch" here includes not only the case that the distance between the virtual camera 47 and the female character 19 becomes 0 but also the case that the distance becomes a predetermined value or less in a separated state. Then, the first object deformation processing part 31 generates deformation in the touched portion of the female character 19 in the case that it is determined by the first touch determination processing part 29 that there was a touch. Note that the "deformation" here includes not only static deformation such as a recess or dent but also dynamic deformation such as swing or vibration.

In this embodiment, a plurality of control points (not shown) for detecting a collision with another object is set on the surface of the virtual camera 47 and the surface of the skin of the body of the female character 19. The first touch determination processing part 29 determines the presence or absence of a touch between the virtual camera 47 and the female character 19 on the basis of detection results at the control points. The control points may be provided on the entire body of the female character 19 or may be provided only on specific portions such as the chest (breast), the buttocks, the thighs, the abdomen, the upper arms, and the like. Similarly, the portion deformable by the first object deformation processing part 31 may be the entire body of the female character 19 or may be only the specific portions. Note that the deforming method used by the first object deformation processing part 31 is not particularly limited; for example, one or a plurality of reference points may be set for each predetermined portion, and deformation may be made by moving the plurality of control points constituting the skin surface in a predetermined direction and by a predetermined amount with respect to the reference points.
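The touch determination described above amounts to a distance test against the skin control points, where a touch is registered once the separation falls to a threshold even before reaching zero. A hypothetical sketch (names and the point representation are assumptions):

```python
import math

def detect_touch(camera_pos, control_points, threshold):
    """Return the first skin control point whose distance from the
    virtual camera is at or below the threshold, or None if no touch."""
    for point in control_points:
        if math.dist(camera_pos, point) <= threshold:
            return point
    return None
```

The returned point would then tell the deformation processing which portion to dent or set swinging.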

FIGS. 13 and 14 show an example of deformation of the female character 19 by a touch with the virtual camera 47. As described above, at each switching position of the virtual camera 47, the user can move the virtual camera 47 closer to or away from the female character 19 by moving the head part. In the example shown in FIG. 13, in a state where the virtual camera 47 is located in front of the female character 19, the user moves the head part so as to move the virtual camera 47 toward a femoral part 49 of the female character 19. Then, in the case that the distance between the virtual camera 47 and the surface of the femoral part 49 becomes a predetermined value or less, a touch is determined, and as shown in FIG. 14, a dented portion 51 caused by the touch is generated on the femoral part 49. As a result, it becomes possible for the user to feel that the body of the user (the face or the head part, for example) touches the femoral part 49 of the female character 19, whereby the presence of the female character 19 can be made more realistic.

Returning to FIG. 2, the object-position control processing part 33 controls the position of a touch object 53 (an example of a second object; see FIGS. 15 and 16, which will be described later) in the virtual three-dimensional space on the basis of the operation input of the user. The touch object 53 is an object prepared for the user to make a pseudo touch to the female character 19. In this embodiment, the position of the touch object 53 is controlled on the basis of the operation input through the controller 5, for example. Note that it may be so configured that a sight line detection function is provided in the head-mount display 7 so that the position of the touch object 53 can be controlled by the sight line direction, for example. As a result, the user can move the touch object 53 to a desired position in the virtual space.

The second touch determination processing part 35 determines the presence or absence of a touch between the touch object 53 and the female character 19. The "touch" here includes not only the case that the distance between the touch object 53 and the female character 19 becomes 0 but also the case that the distance becomes a predetermined value or less in a separated state. Then, the first object deformation processing part 31 generates deformation in the touched portion of the female character 19 in the case that it is determined by the second touch determination processing part 35 that there was a touch. Note that the "deformation" here includes not only static deformation such as a recess or a dent but also dynamic deformation such as swing or vibration.

FIGS. 15 and 16 show an example of deformation of the female character 19 by a touch with the touch object 53. As described above, the user can freely move the touch object 53 by using the controller 5 and the like. In the example shown in FIG. 15, in a state where the virtual camera 47 is located in front of the female character 19, the user moves the touch object 53 toward an abdomen part 55 of the female character 19. Then, in the case that the distance between the touch object 53 and the surface of the abdomen part 55 becomes a predetermined value or less, a touch is determined, and as shown in FIG. 16, a recess portion 57 caused by the touch is generated on the abdomen part 55. As a result, it becomes possible for the user to feel that the body of the user (the hand or the leg, for example) touches the abdomen part 55 of the female character 19, whereby the presence of the female character 19 can be made more realistic. Note that the shape of the touch object 53 shown in FIGS. 15 and 16 is an example, and it may be another shape, such as an object of a hand, a controller, or the like, for example.

Returning to FIG. 2, the vibration generation processing part 37 generates vibration in the controller 5 in at least one of the case that it is determined by the first touch determination processing part 29 that there was a touch between the virtual camera 47 and the female character 19 and the case that it is determined by the second touch determination processing part 35 that there was a touch between the touch object 53 and the female character 19. As a result, the user can feel the touch with the female character 19 more realistically, whereby the reality of the virtual space can be improved.

The airflow generation processing part 39 generates, in the case that the controller 5 is moved in a predetermined direction, a virtual airflow in a direction along the predetermined direction in the virtual three-dimensional space. Further, the second object deformation processing part 41 generates deformation by the air pressure of the airflow in at least one of a predetermined portion of the female character 19 and an attached object attached to the female character 19. Note that the "deformation" here includes not only static deformation such as a recess or dent but also dynamic deformation such as swing or vibration, and moreover tearing, breakage, disassembly, and the like.

In this embodiment, an acceleration sensor, a gyro sensor, or the like (not shown) is provided in the controller 5, for example, and the three-dimensional operation direction and acceleration of the controller 5 are detected. The airflow generation processing part 39 generates, on the basis of the detection results of the sensor, a virtual airflow with an air pressure (or an air speed) according to the acceleration of the operation, in a three-dimensional direction along the operation direction of the controller 5.
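This mapping can be sketched as normalizing the controller's motion vector and scaling it by the operation's acceleration. The `gain` factor below is a hypothetical tuning parameter, not something stated in the patent:

```python
import math

def airflow_vector(motion_dir, acceleration, gain=1.0):
    """Virtual airflow: direction along the controller's operation
    direction, magnitude (air pressure) according to the acceleration."""
    length = math.sqrt(sum(c * c for c in motion_dir))
    if length == 0.0:
        return (0.0, 0.0, 0.0)  # no motion, no airflow
    pressure = gain * acceleration
    return tuple(c / length * pressure for c in motion_dir)
```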

FIGS. 17-19 show an example of generation of the virtual airflow and deformation of the object by the airflow. As shown in FIG. 17, in a state where the virtual camera 47 is located on the rear of the female character 19, for example, in the case that the controller 5 is operated in a substantially upper left direction (once or several times) as shown in FIG. 18, a virtual airflow in the substantially upper left direction is generated in the virtual space as shown in FIG. 19. In the example shown in FIG. 19, a hair part 59 of the female character 19 flows, and a swimming wear 61 (an example of the attached object) of the female character 19 is turned up by the generated airflow.

Note that a portion of the body other than the hair part 59 of the female character 19, such as the chest (breast), the buttocks, the thighs, the abdomen, the upper arm and the like, may be deformed by the airflow. Further, the attached object may be clothes other than the swimming wear, personal belongings, an accessory or the like. Furthermore, a deformation amount of the hair part 59 or the swimming wear 61 may be increased/decreased in accordance with the speed or acceleration of the controller 5 or the number of swing times or the like. Moreover, tearing, breakage, disassembly or the like may be generated in the attached object in the case that the deformation amount exceeds a predetermined amount, for example.
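The scaling of the deformation amount with controller speed and swing count, and the tearing past a predetermined amount, could be sketched as below; the multiplicative model and all threshold values are assumptions, since the description states only that the amount is increased/decreased.

```python
def airflow_deformation(base_amount, speed, swing_count, tear_threshold=5.0):
    """Deformation of the hair part or swimming wear grows with the
    controller's speed and the number of swing times; tearing or breakage
    may be generated once the amount exceeds a predetermined threshold."""
    amount = base_amount * speed * max(1, swing_count)
    torn = amount > tear_threshold
    return amount, torn
```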

Returning to FIG. 2, the sight line direction detection processing part 43 detects to what portion of the female character 19 the line of sight of the user is directed, on the basis of at least either one of the direction of the virtual camera 47 and the position of the virtual camera 47. Then, the object behavior control processing part 45 controls a behavior of the female character 19 on the basis of the portion detected by the sight line direction detection processing part 43. Moreover, the object behavior control processing part 45 controls the behavior of the female character 19 on the basis of the touched portion of the female character 19 in at least either one of the case that it is determined by the first touch determination processing part 29 that there was a touch between the virtual camera 47 and the female character 19 and the case that it is determined by the second touch determination processing part 35 that there was a touch between the touch object 53 and the female character 19. The "behavior" here is movement, voice (words, tone), expressions or the like of the female character 19, for example.

Note that, in addition to or instead of the behavior of the female character 19, progress itself of the game may be controlled on the basis of the portion in the sight line direction or the touched portion, for example, by finishing the viewing mode or the like.

As a result, in the case that the user gazes at a specific portion (breast or buttocks) of the female character 19 or performs an action of touching the specific portion repeatedly, for example, the female character 19 can be made to be angry or feel offended. Moreover, by stroking the hair part 59 of the female character 19 or by praising the taste of fashion while gazing at the hair part 59 or the swimming wear 61, for example, the female character 19 can be made happy to the contrary. Since emotions of the female character 19 can be expressed as above, the presence of the female character 19 can be made more realistic.

Note that the processing sharing described above is an example, and the processing in each processing part may be executed by an even smaller number of processing parts (one processing part, for example) or by further segmented processing parts. Further, the functions of each processing part described above are implemented by a game program executed by a CPU 101 (see FIG. 21, which will be described later), but a part of them may be implemented by an actual device such as a dedicated integrated circuit such as an ASIC, an FPGA and the like or other electric circuits and the like, for example.

5. Processing Procedure Executed by Information Processing Device

Subsequently, by using FIG. 20, an example of a processing procedure executed by the CPU 101 in the information processing device 3 will be described.

At Step S105, the information processing device 3 controls the position and the direction of the virtual camera 47 by the camera-position control processing part 21 and the camera-direction control processing part 23 on the basis of the position and the direction of the head part of the user detected by the position detector 17 and the direction detector 15 of the head-mount display 7.

At Step S110, the information processing device 3 controls the position of the touch object 53 in the virtual three-dimensional space by the object-position control processing part 33 on the basis of the operation input through the controller 5.

At Step S115, the information processing device 3 determines by the camera-position switching processing part 25 whether or not a switching operation of the position of the virtual camera 47 is performed. In the case that the switching operation of the camera position is performed by pressing any one of the buttons 11 of the controller 5, for example (Step S115: YES), the routine proceeds to the subsequent Step S120. On the other hand, in the case that the switching operation of the camera position is not performed (Step S115: NO), the routine proceeds to Step S130, which will be described later.

At Step S120, the information processing device 3 switches the position of the virtual camera 47 among a plurality of positions set in advance by the camera-position switching processing part 25 in accordance with the switching operation of the user.

At Step S125, the information processing device 3 corrects the direction of the virtual camera 47 by the camera-direction correction processing part 27 so that the direction of the virtual camera 47, when seen from the vertical direction, becomes the direction with the female character 19 on the front at the position switched to at Step S120.
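The correction at Step S125 — facing the character when both positions are projected onto the horizontal plane, without touching the elevation/depression angle — can be sketched as follows, assuming a y-up coordinate system and a yaw angle measured with `atan2` (both assumptions, as the description fixes no coordinate convention).

```python
import math

def corrected_yaw(camera_pos, character_pos):
    """Yaw (rotation about the vertical y-axis) that puts the character on
    the front of the camera when both positions are projected onto the
    horizontal x-z plane. Pitch is deliberately left unchanged, so only
    the rotating angle around the vertical axis is corrected."""
    dx = character_pos[0] - camera_pos[0]
    dz = character_pos[2] - camera_pos[2]
    return math.atan2(dx, dz)
```

Because the y components never enter the computation, a camera looking slightly up or down keeps that elevation/depression angle after the switch.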

At Step S130, the information processing device 3 determines the presence or absence of a touch between the virtual camera 47 and the female character 19 and the presence or absence of a touch between the touch object 53 and the female character 19 by the first touch determination processing part 29 and the second touch determination processing part 35. In the case that it is determined that there is either one of the touches (Step S130: YES), the routine proceeds to the subsequent Step S135. On the other hand, in the case that it is determined that there is neither of the touches (Step S130: NO), the routine proceeds to Step S150, which will be described later.

At Step S135, the information processing device 3 generates deformation on the touched portion of the female character 19 by the first object deformation processing part 31.

At Step S140, the information processing device 3 generates vibration in the controller 5 by the vibration generation processing part 37.

At Step S145, the information processing device 3 determines whether or not the touch of the virtual camera 47 with the female character 19 or the touch of the touch object 53 with the female character 19 satisfies a predetermined condition. The predetermined condition may be whether or not the touched portion is a specific portion (breast, buttocks or the like), the number of touch times within a predetermined period is a predetermined number of times or more, or alternatively a continuous touch period is a predetermined period or more, or the like, for example. In the case that the touch with the female character 19 satisfies the predetermined condition (Step S145: YES), the routine proceeds to Step S160, which will be described later. On the other hand, in the case that the touch with the female character 19 does not satisfy the predetermined condition (Step S145: NO), the routine proceeds to the subsequent Step S150.
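The three alternative conditions named for Step S145 can be written as one boolean check; every threshold value below is an assumed placeholder for the "predetermined" quantities in the description.

```python
def touch_condition_met(touched_portion, touches_in_period, continuous_period,
                        specific_portions=("breast", "buttocks"),
                        count_threshold=3, period_threshold=2.0):
    """Step S145 condition: the touched portion is a specific portion, the
    number of touches within a predetermined period reaches a predetermined
    count, or a continuous touch lasts a predetermined period or more.
    All thresholds are assumed values."""
    return (touched_portion in specific_portions
            or touches_in_period >= count_threshold
            or continuous_period >= period_threshold)
```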

At Step S150, the information processing device 3 detects to what portion of the female character 19 the line of sight of the user is directed by the sight line direction detection processing part 43 on the basis of at least either one of the direction of the virtual camera 47 and the position of the virtual camera 47.

At Step S155, the information processing device 3 determines whether or not the sight line direction (portion) detected at Step S150 satisfies a predetermined condition. The predetermined condition may be whether or not the portion to which the line of sight is directed is a specific portion (breast, buttocks or the like), the number of times of gazing at the specific portion within a predetermined period is a predetermined number of times or more, a period of gazing at the specific portion is a predetermined period or more, or the like, for example. In the case that the sight line direction satisfies the predetermined condition (Step S155: YES), the routine proceeds to the subsequent Step S160. On the other hand, in the case that the sight line direction does not satisfy the predetermined condition (Step S155: NO), the routine proceeds to the subsequent Step S165.

At Step S160, the information processing device 3 controls the behavior of the female character 19 by the object behavior control processing part 45, such as a behavior in which the female character 19 feels offended, for example.

At Step S165, the information processing device 3 determines by the airflow generation processing part 39 whether or not the controller 5 has been moved at a predetermined speed or acceleration or more. In the case that the controller 5 has been moved (Step S165: YES), the routine proceeds to the subsequent Step S170. On the other hand, in the case that the controller 5 has not been moved (Step S165: NO), the routine proceeds to Step S175, which will be described later.

At Step S170, the information processing device 3 generates the virtual airflow in the direction along the operation direction of the controller 5 in the virtual three-dimensional space by the airflow generation processing part 39. Then, by the second object deformation processing part 41, deformation by the air pressure of the airflow is generated in at least either one of a predetermined portion of the female character 19 and the swimming wear 61 that the female character 19 wears.

At Step S175, the information processing device 3 determines whether or not a predetermined operation input to finish the game has been made. The operation input to finish the game is an operation to cancel the selection of the viewing mode described in FIG. 3 or the like, for example. In the case that there is no operation input to finish the game (Step S175: NO), the routine returns to the previous Step S105 and repeats a similar procedure. On the other hand, in the case that there is the operation input to finish the game (Step S175: YES), this flow is finished.

Note that the processing procedure described above is an example, and at least a part of the procedure may be deleted or changed, or a procedure other than the above may be added. Moreover, the order of at least a part of the procedures may be changed, or a plurality of procedures may be integrated into a single procedure.
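The branching of the procedure above can be condensed into one per-frame pass. Every name below is hypothetical, each flag stands in for the corresponding determination, and the control flow after Step S160 (skipping the airflow check) is an assumption, since the flowchart figure is not reproduced here.

```python
class FrameSketch:
    """Minimal stand-in for the information processing device 3; the flags
    fake the determinations of Steps S115, S130, S145, S155 and S165."""

    def __init__(self, switch=False, touch=False, touch_cond=False,
                 gaze_cond=False, swung=False):
        self.switch, self.touch = switch, touch
        self.touch_cond, self.gaze_cond, self.swung = touch_cond, gaze_cond, swung

    def run_frame(self):
        log = ["S105", "S110"]          # camera control, touch-object control
        if self.switch:                 # S115: switching operation?
            log += ["S120", "S125"]     # switch position, correct direction
        behave = False
        if self.touch:                  # S130: any touch?
            log += ["S135", "S140"]     # deform portion, vibrate controller
            behave = self.touch_cond    # S145: touch condition?
        if not behave:
            behave = self.gaze_cond     # S150/S155: gaze condition?
        if behave:
            log.append("S160")          # control character behavior
        elif self.swung:                # S165: controller swung?
            log.append("S170")          # generate airflow, deform
        return log                      # S175 (finish check) omitted
```

For example, a frame with a qualifying touch runs S105, S110, S135, S140 and S160; a frame with only a controller swing runs S105, S110 and S170.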

6. Hardware Configuration of the Information Processing Device

A hardware configuration of the information processing device 3 achieving the processing parts implemented by the program executed by the CPU 101 described above will be described with reference to FIG. 21.

As shown in FIG. 21, the information processing device 3 has, for example, a CPU 101, a ROM 103, a RAM 105, a GPU 106, a dedicated integrated circuit 107 constructed for a specific use such as an ASIC or an FPGA, an input device 113, an output device 115, a storage device 117, a drive 119, a connection port 121, and a communication device 123. These constituent elements are mutually connected via a bus 109 and an input/output (I/O) interface 111 such that signals can be transferred.

The game program can be recorded in the ROM 103, the RAM 105, and the storage device 117, for example.

The game program can also temporarily or permanently (non-transitorily) be recorded in a removable recording medium 125 such as magnetic disks including flexible disks, various optical disks including CDs, MO disks, and DVDs, and semiconductor memories. The recording medium 125 as described above can be provided as so-called packaged software. In this case, the game program recorded in the recording medium 125 may be read by the drive 119 and recorded in the storage device 117 through the I/O interface 111, the bus 109, etc.

The game program may also be recorded in, for example, a download site, another computer, or another recording medium (not shown). In this case, the game program is transferred through a network NW such as a LAN and the Internet, and the communication device 123 receives this program. The program received by the communication device 123 may be recorded in the storage device 117 through the I/O interface 111, the bus 109, etc.

The game program may also be recorded in an appropriate external connection device 127, for example. In this case, the game program may be transferred through the appropriate connection port 121 and recorded in the storage device 117 through the I/O interface 111, the bus 109, etc.

The CPU 101 executes various processes in accordance with the program recorded in the storage device 117 to implement the processes of the camera-direction correction processing part 27, the first touch determination processing part 29, etc. In this case, the CPU 101 may directly read and execute the program from the storage device 117 or may execute the program once it is loaded in the RAM 105. In the case that the CPU 101 receives the program through, for example, the communication device 123, the drive 119, or the connection port 121, the CPU 101 may directly execute the received program without recording it in the storage device 117.

The CPU 101 may execute various processes based on a signal or information input from the input device 113, such as the controller 5 described above, a mouse, a keyboard, and a microphone, as needed.

The GPU 106 executes processes for displaying images, such as rendering processing, based on a command of the CPU 101.

The CPU 101 and the GPU 106 may output a result of execution of the processes from the output device 115 such as the display unit 13 of the head-mount display 7 and a sound output device (not shown), for example. Further, the CPU 101 and the GPU 106 may transmit this process result to the communication device 123 or the connection port 121 as needed or may record the process result into the storage device 117 or the recording medium 125.

7. Effect of Embodiment

The game program of this embodiment has the information processing device 3 function as the camera-position switching processing part 25 for switching the position of the virtual camera 47 for generating an image of a first-person view point in the virtual three-dimensional space among a plurality of positions set in advance on the basis of the operation input of the user, the camera-direction control processing part 23 for controlling the direction of the virtual camera 47 on the basis of the operation input of the user, and the camera-direction correction processing part 27 for correcting the direction of the virtual camera 47 controlled by the camera-direction control processing part 23 on the basis of a positional relationship between the female character 19 present in the virtual three-dimensional space and the virtual camera 47 when the position of the virtual camera 47 is switched by the camera-position switching processing part 25.

In this embodiment, the user can switch the position of the virtual camera 47 (the position of the view point) for generating the image of the first-person view point to a desired position among a plurality of positions set in advance by the operation input and can also control the direction of the virtual camera 47 (the direction of sight) to a desired direction. As a result, the user can watch the female character 19 present in the virtual space from various directions. At this time, when the position of the virtual camera 47 is switched, the direction of the virtual camera 47 is corrected on the basis of the positional relationship between the female character 19 and the virtual camera 47. As a result, when the user switches the position of the virtual camera 47 (when the position of the view point is switched), the sight line direction can be corrected so that the female character 19 comes to the front at all times, and thus such a situation that the user loses the direction and loses sight of the female character 19 can be prevented. Moreover, since a feeling of being with the female character 19 at all times in the virtual space can be produced, the presence of the female character 19 can be made more realistic. Therefore, the reality of the virtual space can be improved.

Moreover, particularly in this embodiment, the camera-direction correction processing part 27 corrects the direction of the virtual camera 47 so that the direction of the virtual camera 47, when seen from the vertical direction, becomes a direction with the female character 19 on the front in the virtual three-dimensional space.

As a result, when the direction of the virtual camera 47 is to be corrected, only the rotating angle around the vertical axis can be made a correction target, without considering an elevation/depression angle with respect to the horizontal plane in the virtual space. Consequently, unnatural display in which the female character 19 is located on the front by switching of the camera position even though the virtual camera 47 is directed to the upper direction or the lower direction with respect to the female character 19 can be prevented, and lowering of the reality of the virtual space can be prevented. Particularly, in the case of a VR (virtual reality) compliant game as in this embodiment, so-called VR sickness of the user caused by a wrong positional relationship between the front direction of the user and the upper part (the sky or the like, for example) and the lower part (the ground or the like, for example) in the virtual space can be suppressed.

Moreover, particularly in this embodiment, the information processing device 3 is configured to conduct transmission/reception of a signal with the display unit 13, constituted capable of being attached to the head part of the user and displaying an image, and the direction detector 15 for detecting the direction of the head part, and the camera-direction control processing part 23 controls the direction of the virtual camera 47 on the basis of the direction of the head part detected by the direction detector 15.

As a result, since the user can change the direction of the virtual camera 47 (the sight line direction) in the virtual space in accordance with the direction of the head part, the user can experience so-called virtual reality. Therefore, a feeling of immersion in the game can be drastically improved. Moreover, a loss of the sense of direction at switching of the view point, which often occurs when the user is experiencing the virtual reality, can be prevented.

Moreover, particularly in this embodiment, the game program has the information processing device 3, conducting transmission/reception of a signal with the position detector 17 for detecting the position of the head part, function as the camera-position control processing part 21 for controlling the position of the virtual camera 47, on the basis of the position of the head part detected by the position detector 17, within a predetermined range around a position switched to by the camera-position switching processing part 25.

As a result, after coarse adjustment of the direction of watching the female character 19 by switching the position of the virtual camera 47 (switching the view point position), the user can make fine adjustment by moving the body and thereby moving the head part. Therefore, since the user can finely adjust the direction from which to watch the female character 19, the presence of the female character 19 is made more realistic, whereby the reality of the virtual space can be further improved.

Moreover, particularly in this embodiment, the game program has the information processing device 3 function as the sight line direction detection processing part 43 for detecting to what portion of the female character 19 the line of sight of the user is directed on the basis of at least either one of the direction of the virtual camera 47 and the position of the virtual camera 47, and the object behavior control processing part 45 for controlling the behavior of the female character 19 on the basis of the portion detected by the sight line direction detection processing part 43.

As a result, in the case that the female character 19 is a character having emotions, for example, the character can be made offended or, to the contrary, happy by the user gazing at a specific portion of the character. Therefore, the attractiveness of the game can be improved.

Moreover, according to this embodiment, other than the effects described above, the following effects can be additionally obtained.

The game program of this embodiment has the information processing device 3 function as the camera-position control processing part 21 for controlling the position of the virtual camera 47 for generating an image of the first-person view point in the virtual three-dimensional space on the basis of the operation input of the user, the first touch determination processing part 29 for determining the presence or absence of a touch between the virtual camera 47 and the female character 19 present in the virtual three-dimensional space, and the first object deformation processing part 31 for generating deformation on the touched portion of the female character 19 in the case that it is determined by the first touch determination processing part 29 that there was a touch.

In this embodiment, the user can arrange the virtual camera 47 for generating an image of the first-person view point at a desired position in the virtual space by the operation input. At this time, in the case that the virtual camera 47 touches the female character 19, the touched portion of the female character 19 is deformed. As a result, a touch by the body of the user (the face or the head part, for example) with the female character 19 can be expressed, and thus, as compared with only watching the female character 19, the presence of the female character 19 can be made more realistic. Therefore, the reality of the virtual space can be improved.

Moreover, particularly in this embodiment, the game program has the information processing device 3 further function as the object-position control processing part 33 for controlling the position of the touch object 53 in the virtual three-dimensional space on the basis of the operation input of the user, and the second touch determination processing part 35 for determining the presence or absence of a touch between the touch object 53 and the female character 19, and the first object deformation processing part 31 generates deformation in the touched portion of the female character 19 in the case that it is determined by the second touch determination processing part 35 that there was a touch.

In this embodiment, the user can position the touch object 53 at a desired position in the virtual three-dimensional space by the operation input. At this time, in the case that the touch object 53 touches the female character 19, the touched portion of the female character 19 is deformed. As a result, since a touch by the body of the user (the hand, the leg or the like, for example) with the female character 19 can be expressed, the presence of the female character 19 can be made more realistic. Therefore, the reality of the virtual space can be further improved.

Moreover, particularly in this embodiment, the game program has the information processing device 3, conducting transmission/reception of a signal with the controller 5, further function as the vibration generation processing part 37 for generating vibration in the controller 5 in at least either one of the case that it is determined by the first touch determination processing part 29 that there was a touch and the case that it is determined by the second touch determination processing part 35 that there was a touch.

As a result, the user can experience the touch with the female character 19 more realistically, whereby the reality of the virtual space can be further improved.

Moreover, particularly in this embodiment, the game program has the information processing device 3 function as the airflow generation processing part 39 for generating the virtual airflow in the direction along the predetermined direction in the virtual three-dimensional space in the case that the controller 5 is moved in the predetermined direction, and the second object deformation processing part 41 for generating deformation by the air pressure of the airflow in at least either one of the predetermined portion of the female character 19 and the swimming wear 61 that the female character 19 wears.

As a result, by moving the controller 5 in the predetermined direction, the user can generate the airflow along the operation direction in the virtual space. Moreover, since deformation according to the direction or strength of the airflow can be generated in the female character 19 or the swimming wear 61, the hair part 59 or the swimming wear 61 of the female character 19 can be made to swing by the airflow, for example. As described above, since the user can make indirect touch with the female character 19 through the air in addition to direct touch, the reality of the virtual space can be further improved, and the attractiveness of the game can also be improved.

Moreover, particularly in this embodiment, the game program has the information processing device 3 function as the object behavior control processing part 45 for controlling the behavior of the female character 19 on the basis of the touched portion of the female character 19 in at least either one of the case that it is determined by the first touch determination processing part 29 that there was a touch and the case that it is determined by the second touch determination processing part 35 that there was a touch.

As a result, in the case that the female character 19 is a character having emotions, for example, the female character 19 can be made offended or, to the contrary, happy by the user touching a specific portion of the female character 19, or the like. Therefore, the attractiveness of the game can be improved.

Moreover, particularly in this embodiment, the information processing device 3, conducting transmission/reception of a signal with the display unit 13, constituted capable of being attached to the head part of the user and displaying an image, and with the direction detector 15 for detecting the direction of the head part, is made to function as the camera-direction control processing part 23 for controlling the direction of the virtual camera 47 on the basis of the direction of the head part detected by the direction detector 15.

As a result, since the user can change the direction of the virtual camera 47 (the sight line direction) in the virtual space in accordance with the direction of the head part, the user can experience so-called virtual reality. Therefore, a feeling of immersion in the game can be drastically improved.

Moreover, particularly in this embodiment, the information processing device 3 is configured to conduct transmission/reception of a signal with the position detector 17 for detecting the position of the head part, and the camera-position control processing part 21 controls the position of the virtual camera 47 on the basis of the position of the head part detected by the position detector 17.

As a result, the user can get closer to, move away from, or touch the female character 19 by moving the body and thereby moving the head part. Therefore, the user can feel a touch by the head part of the user with the female character 19 more realistically, whereby the reality of the virtual space can be further improved.

8. Modification Example and the Like

Note that the present invention is not limited to the embodiment and is capable of various modifications within a range not departing from the gist and technical idea thereof.

For example, the case that the VR mode is implemented in the game program of the present invention has been described as an example, but the present invention can be applied also to a game in which the VR mode is not implemented.

Moreover, the case that the present invention is applied to a game for observing an object has been described as an example, but the present invention can be applied also to games in other genres such as a horror game, a love game, a simulation game, a role playing game, an action game, an adventure game and the like. Particularly, the present invention is suitable for an experience-type game in which the VR mode is implemented.

Moreover, the case that the present invention is a game program has been described, but the present invention can be applied also to an art other than the game (CAD, computer simulation and the like, for example), such as an image generation program and the like.

Techniques according to the embodiment and each modified example may be appropriately combined and utilized in addition to the examples already described above. Although not exemplified one by one, the embodiment and each modified example may be carried out with various changes applied thereto without departing from the technical idea of the present invention.

Claims

  1. A game processing method executed by an information processing device, comprising: generating an image of a first-person view point of a virtual camera in a virtual three-dimensional space at a first position of a plurality of positions set in advance;switching the first-person view point from the first position of the virtual camera for generating the image to a second position of the plurality of positions based on an operation input of a user;controlling a direction of the virtual camera based on the operation input of the user;and correcting the direction of the virtual camera based on a positional relationship between a predetermined object present in the virtual three-dimensional space and the virtual camera when the virtual camera is switched from the first position to the second position by changing a rotating angle of the virtual camera about a vertical axis in the virtual three-dimensional space without changing an elevation/depression angle of the virtual camera with respect to a horizontal plane in the virtual three-dimensional space.
  1. The game processing method according to claim 1 , wherein the plurality of positions each have a preset rotating angle set in advance and a preset elevation/depression angle set in advance, and wherein the correcting of the direction of the virtual camera when the virtual camera is switched from the first position to the second position includes changing the rotating angle of the virtual camera to the preset rotating angle of the second position without changing the elevation/depression angle of the virtual camera to the preset elevation/depression angle of the second position.
  2. The game processing method according to claim 2 , further comprising: detecting a direction of a head part of the user, wherein the controlling of the direction of the virtual camera comprises controlling the direction of the virtual camera based on the direction of the head part detected.
  3. The game processing method according to claim 3 , further comprising: detecting a position of the head part of the user;and controlling a position of the virtual camera based on the position of the head part detected in a predetermined range around a current position of the plurality of positions at which the virtual camera is located.
  4. The game processing method according to claim 4 , further comprising: detecting to what portion in the object a line of sight of the user is directed based on at least one of the direction of the virtual camera and the position of the virtual camera;and controlling a behavior of the object based on the detected portion.
  6. A non-transitory recording medium readable by an information processing device, the recording medium storing a game program programmed to cause the information processing device to:
  generate an image of a first-person view point of a virtual camera in a virtual three-dimensional space at a first position of a plurality of positions set in advance;
  switch the first-person view point from the first position of the virtual camera for generating the image to a second position of the plurality of positions based on an operation input of a user;
  control a direction of the virtual camera based on the operation input of the user; and
  correct the direction of the virtual camera based on a positional relationship between a predetermined object present in the virtual three-dimensional space and the virtual camera when the virtual camera is switched from the first position to the second position, by changing a rotating angle of the virtual camera about a vertical axis in the virtual three-dimensional space without changing an elevation/depression angle of the virtual camera with respect to a horizontal plane in the virtual three-dimensional space.
  7. The recording medium according to claim 6, wherein the plurality of positions each have a preset rotating angle set in advance and a preset elevation/depression angle set in advance, and wherein the correcting of the direction of the virtual camera when the virtual camera is switched from the first position to the second position includes changing the rotating angle of the virtual camera to the preset rotating angle of the second position without changing the elevation/depression angle of the virtual camera to the preset elevation/depression angle of the second position.
  8. The recording medium according to claim 7, wherein the game program is further programmed to cause the information processing device to transmit or receive a signal with a display unit to display the image and a direction detector to detect a direction of a head part of the user, wherein the display unit and the direction detector are configured to be attached to the head part, and wherein the correcting of the direction of the virtual camera when the virtual camera is switched from the first position to the second position includes controlling the direction of the virtual camera based on the direction of the head part detected by the direction detector.
  9. The recording medium according to claim 8, wherein the game program is further programmed to cause the information processing device to transmit or receive a signal with a position detector to detect a position of the head part, and wherein the game program is further programmed to cause the information processing device to control a position of the virtual camera based on the position of the head part detected by the position detector, in a predetermined range around a current position of the plurality of positions at which the virtual camera is located.
  10. The recording medium according to claim 9, wherein the game program is further programmed to cause the information processing device to: detect to what portion in the object a line of sight of the user is directed, based on at least one of the direction of the virtual camera and the position of the virtual camera; and control a behavior of the object based on the portion detected using the line of sight.
