U.S. Pat. No. 11,020,663

VIDEO GAME WITH AUTOMATED SCREEN SHOTS

Assignee: GREE, INC.

Issue Date: July 5, 2018

Illustrative Figure

Abstract

A system obtains game medium information associated with a game medium in a virtual space. When a predetermined event occurs, the system determines a generation condition for an image of the virtual space based on the game medium information. The system generates an image including the game medium based on the generation condition.

Description


DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

First Embodiment

A game processing program, a game processing method, and a game processing device according to a first embodiment will now be described. In the present embodiment, the case of providing a user device 10 with a game in which characters battle with each other will be described.

[User Device 10]

The user device 10 is a computer terminal (game processing device) operated by a user. The user device 10 executes various applications and is used to output and input information.

As shown in FIG. 1, the user device 10 includes a controller 20, a memory 30, a transceiver 40, and a display 50.

The memory 30 includes game field information 31, object information 32, a game history 33, an image generation condition 34, and an image generation history 35.

The game field information 31 is used to render the background of a game field, which is a three-dimensional virtual space. As shown in FIG. 2, the game field information 31 includes identification information of the game field (field ID) and attribute information of a geographic element included in the game field (for example, the type, size, and position coordinates of the geographic element in the game field).

The object information 32 relates to the attribute of an object placed in the game field. As shown in FIG. 3A, the object information 32 includes information on the character placed in the game field as an object (for example, the size (height) of the character). Further, as shown in FIG. 3B, the object information 32 includes information on a body placed in the game field as an object (for example, the size of the body). Further, as shown in FIG. 3C, the object information 32 includes information on an activity of the character in the game field (for example, the activity content of the character corresponding to an input operation of the user). As an example of the activity content corresponding to the input operation of the user, the character walks when the user taps a predetermined position and jumps when the user taps the predetermined position twice in a row (double-tap).
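The input-to-activity mapping described above can be sketched as a simple lookup table. The operation names, activity names, and default value below are illustrative assumptions, not drawn from the patent:

```python
# Hypothetical sketch of mapping a user's input operation to a character
# activity. Operation and activity names are assumptions for illustration.
ACTIVITY_BY_INPUT = {
    "tap": "walk",         # single tap on a predetermined position
    "double_tap": "jump",  # two taps in a row on the same position
}

def activity_for_input(operation: str) -> str:
    """Return the activity content for an input operation ("idle" if unmapped)."""
    return ACTIVITY_BY_INPUT.get(operation, "idle")
```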

The game history 33 is history information of the character included in a scene of the game and is updated as the game progresses. As shown in FIG. 4, the game history 33 includes information on the type of activity of each character (activity ID) at the corresponding playing time point of the game and information on the state of each character (for example, the position coordinates in the game field, battle history, health value, and the amount of damage given to the opponent).

The image generation condition 34 defines a condition under which an image of a scene of the game is generated as a still image optimized based on the attribute of an object included in the scene during the game progress. As shown in FIG. 5, the image generation condition 34 includes information on a position serving as a viewpoint, a direction of a sight line, and the angle of view used when the image of the scene of the game is generated. The image generation condition 34 differs depending on the attribute of the object included in the scene of the game. When the progress state of the game satisfies a predetermined condition (for example, the health value of a character is zero), the image generation condition 34 is determined based on the progress state of the game.
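The three fields named above (viewpoint position, sight-line direction, angle of view) can be sketched as a small record. The class name, field types, and example values are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ImageGenerationCondition:
    """One record of an image generation condition: a viewpoint position,
    a sight-line direction, and an angle of view."""
    viewpoint: Tuple[float, float, float]   # camera position in the game field
    sight_line: Tuple[float, float, float]  # direction the camera faces
    angle_of_view: float                    # field-of-view angle in degrees

# Example: a condition that looks down at a defeated character from the
# enemy's position (all values are placeholders).
look_down = ImageGenerationCondition(
    viewpoint=(10.0, 5.0, 3.0),
    sight_line=(0.0, -1.0, -0.5),
    angle_of_view=60.0,
)
```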

The image generation history 35 includes information on the image of the game field that has been generated based on the image generation condition 34 during the game progress.

The controller 20 functions as a game manager 21, a display controller 22, and an SNS processor 23 by executing the game processing program.

The game manager 21 receives an operation signal from an operation input interface 60 operated by the user. In response to the received operation signal, the game manager 21 identifies the state in which the character is operated by the user.

The game manager 21 manages the progress of the game by the user. More specifically, the game manager 21 moves the character in the game field based on the state of operation of the character by the user. Further, when the character operated by the user (user-operated character) approaches the character serving as a battle opponent, the game manager 21 starts a battle between the characters.

The game manager 21 holds trigger conditions for performing photographing (image generation). When an event in the game (for example, attacking, walking, or clearing of a difficult state) matches a content that is set as the trigger condition, the game manager 21 determines the image generation condition 34 based on the attribute of the object included in the scene of the game. Examples of the trigger condition include a condition in which the user-operated character performs a special attack such as a finishing move and a condition in which the user-operated character starts battling with a boss character, which is stronger than normal enemy characters (i.e., associated with a parameter value such as a higher health value or attack ability). Further, the game manager 21 generates an image of the game field based on the image generation condition 34. In addition, the game manager 21 identifies a region of the generated image other than the object included in the image as a vacant space and appends a message to the vacant space. In this case, the game manager 21 may append a set phrase as a message or append a message generated based on the game history 33. The game manager 21 adds the image of the game field including the appended message to the image generation history 35.
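The trigger check described above reduces to membership in a held set of trigger conditions. The event names below are assumptions chosen to echo the examples in the text:

```python
# Hypothetical sketch of the trigger check: the game manager holds a set
# of trigger conditions and compares each in-game event against it.
# Event names are assumptions, not identifiers from the patent.
TRIGGER_CONDITIONS = {"special_attack", "boss_battle_start"}

def matches_trigger(event: str) -> bool:
    """True when an in-game event matches a content set as a trigger condition."""
    return event in TRIGGER_CONDITIONS
```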

The display controller 22 extracts the game field information 31 corresponding to the viewpoint of the user-operated character. The display controller 22 transmits the extracted game field information 31 to the display 50 as an image signal. Further, the display controller 22 extracts images of the user-operated character and the character serving as the battle opponent from the object information 32 and transmits information corresponding to the extracted images to the display 50 as an image signal.

The SNS processor 23 executes a process for using a social networking service (SNS). The SNS processor 23 retrieves an image of the game field generated during the game progress from the image generation history 35 and transmits the image to an SNS server 100 via the transceiver 40. In this case, the SNS processor 23 may obtain a post history on an SNS and automatically select where to post based on the post history. Alternatively, the user may set on which SNS the user posts. The user may set where to post during the initial setting or when a predetermined event ends (for example, when a stage ends) in the game played by the user.
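One plausible reading of "automatically select where to post based on the post history" is to pick the most frequently used service. This is a sketch under that assumption; the selection policy and service names are not specified by the patent:

```python
from collections import Counter

def select_sns(post_history):
    """Pick the service the user has posted to most often, or None when
    there is no history. A hypothetical selection policy for illustration."""
    if not post_history:
        return None
    return Counter(post_history).most_common(1)[0][0]
```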

The transceiver 40 communicates with a server device or other user devices via a network.

[Game Process]

A process for transmitting an image of the game field generated during the game progress to the SNS server will now be described with reference to FIGS. 6 to 8.

The controller 20 determines whether or not the event in the game matches the trigger condition (step S10). More specifically, the game manager 21 of the controller 20 monitors a progress state of the event in the game and compares the progress state with the trigger condition set in advance.

If the progress state of the event in the game matches the trigger condition ("YES" in step S10), the controller 20 identifies an object included in the scene of the game (step S20). More specifically, the game manager 21 of the controller 20 identifies the position of the user-operated character in the game field. Further, the game manager 21 identifies other objects located in the vicinity of the position of the character based on position information in the object information 32.

Subsequently, the controller 20 determines the image generation condition 34 (step S30).

As shown in FIG. 7, in a determination process for the image generation condition 34, the controller 20 identifies the attribute of the object (step S30A). More specifically, the game manager 21 extracts the object information 32 of the object identified in step S20 and obtains information on the character included in the scene of the game, information on a body included in the scene of the game, and information on an activity of the character.

Then, the controller 20 identifies the progress state of the game (step S30B). More specifically, the game manager 21 extracts the game history 33 of the character identified as the object in step S20 and identifies information on the state of the character included in the scene of the game.

Afterwards, the controller 20 determines whether or not the progress state of the game satisfies a predetermined condition (step S30C). More specifically, the game manager 21 determines whether or not the state of the character identified in step S30B satisfies a condition suitable for determining the image generation condition 34. For example, the condition suitable for determining the image generation condition 34 is that the health value of the character is zero.

If the progress state of the game satisfies the predetermined condition ("YES" in step S30C), the controller 20 determines the image generation condition 34 based on the progress state of the game (step S30D). More specifically, for example, when the health value of the character is zero, the game manager 21 of the controller 20 determines the image generation condition 34 for generating an image in which the character having a health value of zero is looked down at by setting, as a viewpoint position, the position of an enemy character serving as a battle opponent.

If the progress state of the game does not satisfy the predetermined condition ("NO" in step S30C), the controller 20 selects the image generation condition 34 corresponding to the attribute of the object (step S30E). More specifically, the game manager 21 obtains a character ID, a body ID, and an activity ID as an example of the attribute of the object included in the scene of the game and selects the image generation condition 34 corresponding to the combination of the obtained IDs.
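Step S30E amounts to a table lookup keyed on the ID combination. The table contents, ID formats, and default below are placeholders, not values from the patent:

```python
# Hypothetical sketch of step S30E: selecting the image generation
# condition from the combination of character ID, body ID, and activity ID.
# Table contents and the fallback condition are placeholder assumptions.
CONDITION_BY_IDS = {
    ("C01", "B01", "A01"): "front_view",
    ("C01", "B02", "A02"): "oblique_view",
}

def select_condition(character_id, body_id, activity_id, default="front_view"):
    """Look up the image generation condition for an ID combination."""
    return CONDITION_BY_IDS.get((character_id, body_id, activity_id), default)
```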

Referring back to FIG. 6, the controller 20 generates an image of the game field (step S40). More specifically, the game manager 21 retrieves information on the viewpoint position, direction, and angle of view defined in the image generation condition 34. In the game field corresponding to the game field information 31, the game manager 21 generates an image from the viewpoint coordinates in the game field identified based on the viewpoint position.

Subsequently, the controller 20 determines whether or not there is a vacant space in the generated image (step S50). More specifically, the game manager 21 identifies a region of the image of the game field occupied by the object based on the position information in the object information 32. The game manager 21 determines whether or not there is a region in the image of the game field other than the object.
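A minimal way to realize the vacant-space check is to test whether a message box of fixed size fits in an image region not occupied by any object rectangle. The corner-probing strategy and box size below are assumptions; the patent does not specify an algorithm:

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for (x, y, width, height) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def find_vacant_corner(image_w, image_h, object_rects, box_w=100, box_h=40):
    """Try the four image corners for a message box that does not overlap
    any object rectangle; return the box, or None when no corner is vacant."""
    candidates = [
        (0, 0, box_w, box_h),
        (image_w - box_w, 0, box_w, box_h),
        (0, image_h - box_h, box_w, box_h),
        (image_w - box_w, image_h - box_h, box_w, box_h),
    ]
    for c in candidates:
        if not any(rects_overlap(c, r) for r in object_rects):
            return c
    return None
```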

If the controller 20 determines that there is a vacant space ("YES" in step S50), the controller 20 determines a message to be displayed in the vacant space (step S60).

As shown in FIG. 8, in a determination process for a message, the controller 20 identifies the attribute of the object (step S60A). More specifically, the game manager 21 extracts the object information 32 of the object identified in step S20 and obtains information on the character included in the scene of the game, information on a body included in the scene of the game, and information on an activity of the character.

Then, the controller 20 identifies the progress state of the game (step S60B). More specifically, the game manager 21 extracts the game history 33 of the character identified as the object in step S20 and identifies information on the state of the character included in the scene of the game.

Afterwards, the controller 20 determines whether or not the progress state of the game satisfies a predetermined condition (step S60C). More specifically, the game manager 21 determines whether or not the state of the character identified in step S60B satisfies a condition suitable for generating the message. For example, the condition suitable for generating the message is that there is a battle history with an enemy character and that the playing time of the game until reaching the current stage is within a limited time.

If the progress state of the game satisfies the predetermined condition ("YES" in step S60C), the controller 20 generates the message based on the progress state of the game (step S60D). More specifically, when there is a battle history with an enemy character, the game manager 21 generates a message including the number of times of battling with the enemy character (for example, "this is Xth challenge to the boss!"). Further, when the playing time of the game until reaching the current stage is within a limited time, the game manager 21 generates a message including the playing time of the game (for example, "Y hours until this stage!").
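Step S60D can be sketched by filling the two message templates quoted above from the progress state. The function signature and parameter names are assumptions; the template strings follow the examples in the text:

```python
def generate_message(battle_count=None, playing_hours=None):
    """Sketch of step S60D: build a message from the progress state of the
    game. Parameter names are assumptions; the templates echo the examples
    given in the description."""
    if battle_count is not None:
        return f"this is {battle_count}th challenge to the boss!"
    if playing_hours is not None:
        return f"{playing_hours} hours until this stage!"
    return None  # no message when neither condition applies
```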

If the progress state of the game does not satisfy the predetermined condition ("NO" in step S60C), the controller 20 selects the message corresponding to the attribute of the object (step S60E). More specifically, the game manager 21 obtains a character ID, a body ID, and an activity ID as an example of the attribute of the object included in the scene of the game and selects the message corresponding to the combination of the obtained IDs.

Referring back to FIG. 6, the controller 20 appends the message to the image of the game field (step S70). More specifically, the game manager 21 appends the message generated as described above to the vacant space in the image of the game field.

If the controller 20 determines that there is no vacant space ("NO" in step S50), the controller 20 does not append the message to the image of the game field.

Subsequently, the controller 20 saves the image of the game field in the memory 30 (step S80). More specifically, when there is a vacant space in the image of the game field, the game manager 21 adds the image of the game field, to which the message is appended, to the image generation history 35. When there is no vacant space, the game manager 21 adds the image of the game field generated based on the image generation condition 34 to the image generation history 35.

Then, the controller 20 posts the image on an SNS (step S90). More specifically, the SNS processor 23 retrieves the image of the game field generated during the game progress from the image generation history 35 and transmits the image to the SNS server 100 via the transceiver 40.

The images of the game field generated during the game progress will now be described with reference to FIGS. 9A to 9D.

FIG. 9A schematically shows the positional relationship of objects S1 to S4 on the game field in a scene of the game. As shown in FIG. 9A, the game manager 21 sets, as a play view, the image viewed from a virtual camera X1.

As shown in FIG. 9B, in the present embodiment, the display controller 22 displays the image in which the objects S1 to S4 on the game field are viewed from the front on the display 50 as a play view. That is, the display controller 22 controls the display 50 so as to display the image. In this case, the display controller 22 displays the object S1 (structure) placed in the game field as a body, the user-operated object S2 (ally character), and the battle opponent objects S3 and S4 (enemy characters) on the display 50.

Further, as shown in FIG. 9A, the game manager 21 sets the image viewed from a virtual camera X2 on the game field as a generated view.

As shown in FIG. 9C, in the present embodiment, the game manager 21 sets, as a generated view, the image in which the objects on the game field are obliquely viewed with the user-operated object located in the center. In this case, the game manager 21 sets, as a generated view, the image including the user-operated object S2 (ally character) and the object S3 (enemy character) facing the object S2.

Additionally, as shown in FIG. 9D, the game manager 21 identifies the vacant space in the generated view shown in FIG. 9C and appends a message to the identified vacant space. In this case, the game manager 21 generates a message M1 based on the game history 33. More specifically, as the game progresses, when the object S2 (ally character) was previously defeated by the battling opponent object S3 (enemy character), the game manager 21 generates the message M1 reflecting the defeat. The game manager 21 appends the generated message M1 to the vicinity of the user-operated object S2 (ally character).

Another example of images of the game field generated during the game progress will now be described with reference to FIGS. 10A to 10C.

FIG. 10A shows an example in which the image generation condition 34 is determined based on the size of an object included in the scene of the game. In this case, the size of the user-operated object S2 (ally character) is smaller than that of a battling opponent object S3α (enemy character). Thus, the game manager 21 generates the image of the game field from the viewpoint in which the user-operated object S2 looks up at the battling opponent object S3α.

FIG. 10B shows an example in which the image generation condition 34 is determined based on the relative positions of multiple objects included in the scene of the game. In this case, the game manager 21 generates the image of the game field from the viewpoint in which the user-operated object S2 (ally character) and a battling opponent object S3β (enemy character) are viewed laterally.

FIG. 10C shows an example in which the image generation condition 34 is determined based on the attribute of the object included in the scene of the game and the background of the game field. In this case, the game manager 21 generates the image of the game field from the viewpoint in which the user-operated object S2 (ally character) and a battling opponent object S3γ (enemy character) are viewed laterally so as to include a geographic element ST (sun) serving as the background of the game field.
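The size-based viewpoint choice of FIG. 10A can be sketched as a comparison of object heights. The symmetric look-down branch and the lateral default are assumptions extrapolated from FIGS. 10A and 10B, not cases stated in the text:

```python
def camera_angle(ally_height, enemy_height):
    """Hypothetical sketch of choosing a viewpoint from object sizes
    (cf. FIG. 10A): tilt up when the battle opponent is larger; the other
    branches are assumed for symmetry and are not from the patent."""
    if enemy_height > ally_height:
        return "look_up"    # the smaller ally looks up at the larger enemy
    if enemy_height < ally_height:
        return "look_down"  # assumed mirror case: larger ally looks down
    return "lateral"        # assumed default: lateral view (cf. FIG. 10B)
```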

As described above, the first embodiment has the following advantages.

(1-1) In the first embodiment, when a scene of the game matches a content set as a trigger condition, the image generation condition 34 is determined based on the attribute of an object included in the scene of the game. The image of the game field including the object is generated based on the determined image generation condition 34. This allows the image of the game field including the object to be generated automatically, without operation by the user.

(1-2) In the first embodiment, the image generation condition 34 is determined based on the size of the object included in the scene of the game, the relative positions of multiple objects, and the background of the game field. This allows a wide variety of images of the game field to be provided.

(1-3) In the first embodiment, the message M1 is displayed in a region of the image of the game field other than the object included in the image. This allows the message M1 to be displayed without interfering with the object included in the image of the game field. Further, appending the message to the scene of the game allows other users to easily recognize what the scene is. Additionally, automatically appending a message saves the user the effort of composing one when posting.

(1-4) In the first embodiment, the image generation condition 34 is determined based on the attribute of the object set in advance in the memory 30 and the attribute of the object that changes as the game progresses. This allows for a wide variety of images of the game field generated based on the attribute of the object.

(1-5) In the first embodiment, the message M1 is generated based on the game history 33 that accumulates as the game progresses. This allows for a wide variety of messages M1 displayed in the image of the game field.

(1-6) When other users check a wide variety of posted images, users who have already played the game and users who have not played the game are both motivated to play the game.

Second Embodiment

A game processing program, a game processing method, and a game processing device according to a second embodiment will now be described. In the second embodiment, the determination process for an image generation condition and the determination process for a message in the first embodiment are partially modified. Like or same reference numerals are given to those components that are the same as the corresponding components of the first embodiment. Such components will not be described in detail.

In the game of the second embodiment, an image of the game field is generated as the game progresses in the same manner as the first embodiment. In the second embodiment, it is assumed that two users advance the game simultaneously, and the object (ally character) operated by each user is placed in a common game field. The image generation condition is determined based on the relative positions of the objects in the game field. More specifically, when the positions of the objects in the game field are proximate to each other, the image generation condition is determined so as to generate an image including both objects. When the positions of the objects in the game field are spaced apart from each other, the image generation condition is determined so as to generate an image including only the object operated by one of the users.
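The branch described above can be sketched as a distance comparison between the two user-operated objects. The threshold value and the method labels are assumptions for illustration:

```python
import math

def choose_method(pos_a, pos_b, threshold=10.0):
    """Sketch of the second embodiment's branch: use the first method
    (frame both user-operated objects) when they are proximate, otherwise
    the second method (frame only one). The threshold is an assumed value."""
    distance = math.dist(pos_a, pos_b)  # Euclidean distance in the game field
    return "first" if distance <= threshold else "second"
```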

As shown in FIG. 11, the controller 20 determines whether or not the event in the game matches the trigger condition (step S110) in the same manner as step S10.

If the progress state of the event in the game matches the trigger condition ("YES" in step S110), the controller 20 identifies an object included in the scene of the game (step S120) in the same manner as step S20.

Subsequently, the controller 20 identifies the user operating each object included in the scene of the game (step S130). More specifically, the game manager 21 identifies the user operating each object by communicating with the user devices 10 operated by other users via the transceiver 40.

Then, the controller 20 determines whether or not the number of users is two or more (step S140). More specifically, the game manager 21 calculates the number of the identified users.

If the number of users is two or more ("YES" in step S140), the controller 20 determines whether or not the user-operated objects are proximate to each other (step S150). More specifically, the game manager 21 extracts the object information 32 of the user-operated objects and identifies the position coordinates of each object in the game field. The game manager 21 compares the distance between the objects in the game field with a threshold value.

If the user-operated objects are proximate to each other ("YES" in step S150), the controller 20 uses a first method to determine the image generation condition 34 (step S160).

As shown in FIG. 12, in a determination process for the image generation condition 34 using the first method, the controller 20 identifies the attributes of the objects of multiple users (step S160A). More specifically, the game manager 21 obtains information on the multiple user-operated characters identified in step S130 and information on the activities of the characters based on the object information 32. Further, the game manager 21 obtains information on a body located in the vicinity of each user-operated character based on the object information 32.

Subsequently, the controller 20 identifies the progress states of the game of the multiple users (step S160B). More specifically, the game manager 21 extracts the game history 33 of the multiple user-operated characters identified in step S130 and obtains information on the state of each user-operated character.

Then, the controller 20 determines whether or not the progress state of the game of any one of the users satisfies a predetermined condition (step S160C). More specifically, the game manager 21 determines whether or not at least one of the progress states of the user-operated characters identified in step S160B satisfies the condition suitable for determining the image generation condition 34.

If at least one of the progress states of the game satisfies the predetermined condition ("YES" in step S160C), the controller 20 determines the image generation condition 34 based on the progress state of the game satisfying the predetermined condition (step S160D). More specifically, when at least one of the progress states of the user-operated characters satisfies the condition suitable for determining the image generation condition 34, the game manager 21 determines the image generation condition 34 for generating an image suitable for the state of the character. In this case, the game manager 21 determines the image generation condition 34 so as to generate an image including the two user-operated characters based on the attributes of the objects of the multiple users identified in step S160A.

If the predetermined condition is not satisfied by any one of the progress states of the game ("NO" in step S160C), the controller 20 selects the image generation condition 34 corresponding to the attributes of the objects of the multiple users (step S160E). More specifically, the game manager 21 obtains character IDs, body IDs, and activity IDs as an example of the attributes of the objects of the multiple users identified in step S160A and selects the image generation condition 34 corresponding to the combination of the obtained IDs.

Referring back to FIG. 11, if the user-operated objects are spaced apart from each other ("NO" in step S150), the controller 20 uses a second method to determine the image generation condition 34 (step S170).

As shown in FIG. 13, in a determination process for the image generation condition 34 using the second method, the controller 20 identifies the attribute of an object of a first user (step S170A). More specifically, the game manager 21 obtains information on the character operated by the first user identified in step S130 (first user-operated character) and information on the activity of the character based on the object information 32. Further, the game manager 21 obtains information on a body located in the vicinity of the first user-operated character based on the object information 32.

Subsequently, the controller 20 identifies the progress state of the game of the first user (step S170B). More specifically, the game manager 21 extracts the game history 33 of the first user-operated character identified in step S130 and obtains information on the state of the first user-operated character.

Then, the controller 20 determines whether or not the progress state of the game of the first user satisfies a predetermined condition (step S170C). More specifically, the game manager 21 determines whether or not the progress state of the first user-operated character identified in step S170B satisfies the condition suitable for determining the image generation condition 34.

If the progress state of the game satisfies the predetermined condition ("YES" in step S170C), the controller 20 determines the image generation condition 34 based on the progress state of the game (step S170D). More specifically, when the progress state of the first user-operated character satisfies the condition suitable for determining the image generation condition 34, the game manager 21 determines the image generation condition 34 for generating an image suitable for the state of the character. In this case, the game manager 21 determines the image generation condition 34 in which the first user-operated character is included in the image based on the attribute of the object of the first user identified in step S170A.

If the progress state of the game does not satisfy the predetermined condition ("NO" in step S170C), the controller 20 selects the image generation condition 34 corresponding to the attribute of the object of the first user (step S170E). More specifically, the game manager 21 obtains a character ID, a body ID, and an activity ID as an example of the attribute of the object of the first user identified in step S170A and selects the image generation condition 34 corresponding to the combination of the obtained IDs.

Referring back to FIG. 11, the controller 20 generates an image of the game field (step S180) in the same manner as step S40.

Subsequently, the controller 20 determines whether or not there is a vacant space in the generated image (step S190) in the same manner as step S50.

If the controller 20 determines that there is a vacant space ("YES" in step S190), the controller 20 determines a message to be displayed in the vacant space (step S200).

As shown in FIG. 14, in a determination process for a message, the controller 20 determines whether or not the image generation condition 34 has been determined using the first method (step S200A). More specifically, the game manager 21 determines whether or not the number of users is two or more and whether or not the user-operated objects are proximate to each other.

If the image generation condition 34 is determined using the first method ("YES" in step S200A), the controller 20 identifies the attributes of the objects of multiple users (step S200B). More specifically, the game manager 21 obtains information on the multiple user-operated characters identified in step S130 and information on the activities of the characters based on the object information 32. Further, the game manager 21 obtains information on a body located in the vicinity of each user-operated character based on the object information 32.

Subsequently, the controller 20 identifies the progress states of the game of the multiple users (step S200C). More specifically, the game manager 21 extracts the game history 33 of the multiple user-operated characters identified in step S130 and obtains information on the state of each user-operated character.

Then, the controller20determines whether or not any one of the progress states of the game of the user satisfies a predetermined condition (step S200D). More specifically, the game manager21determines whether or not at least one of the progress states of the user-operated characters identified in step S200C satisfies the condition suitable for generating the message.

If at least one of the progress states of the game satisfies the predetermined condition (“YES” in step S200D), the controller20generates the message based on the progress state of the game satisfying the predetermined condition (step S200E). More specifically, when at least one of the progress states of the user-operated characters satisfies the condition suitable for generating the message, the game manager21generates the message suitable for the state of the character.

If the predetermined condition is not satisfied by any one of the progress states of the game (“NO” in step S200D), the controller20selects the message corresponding to the attributes of the objects of the multiples users (step S200F). More specifically, the game manager21obtains character IDs, body IDs, and activity IDs as an example of the attributes of the objects of the multiple users identified in step S130and selects the message corresponding to the combination of the obtained IDs.

Additionally, if the image generation condition 34 is not determined using the first method, that is, if the image generation condition 34 is determined using the second method (“NO” in step S200A), the controller 20 identifies the attribute of an object of the first user (step S200G). More specifically, the game manager 21 obtains information on the first user-operated character identified in step S130 and information on the activity of the character based on the object information 32. Further, the game manager 21 obtains information on a body located in the vicinity of the first user-operated character based on the object information 32.

Subsequently, the controller 20 identifies the progress state of the game of the first user (step S200H). More specifically, the game manager 21 extracts the game history 33 of the first user-operated character identified in step S130 and obtains information on the state of the first user-operated character.

Then, the controller 20 determines whether or not the progress state of the game satisfies a predetermined condition (step S200I). More specifically, the game manager 21 determines whether or not the progress state of the first user-operated character identified in step S200H satisfies the condition suitable for generating the message.

If the progress state of the game satisfies the predetermined condition (“YES” in step S200I), the controller 20 generates the message based on the progress state of the game (step S200J). More specifically, when the progress state of the first user-operated character satisfies the condition suitable for generating the message, the game manager 21 generates a message suitable for the state of the character.

If the predetermined condition is not satisfied by the progress state of the game (“NO” in step S200I), the controller 20 selects the message corresponding to the attribute of the object of the first user (step S200K). More specifically, the game manager 21 obtains a character ID, a body ID, and an activity ID as an example of the attribute of the object of the first user identified in step S130 and selects the message corresponding to the combination of the obtained IDs.
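The branching described in steps S200A through S200K can be summarized as follows. The Python sketch below is an illustrative, non-limiting reading of the flow; the helper names (`is_proximate`, `message_for_state`) and the data shapes are hypothetical and do not appear in the disclosure.

```python
# Sketch of the message-determination flow (steps S200A-S200K).
# All helper names and data shapes are hypothetical illustrations.

def determine_message(users, message_table, is_proximate, message_for_state):
    """Pick a message for the generated image.

    users: list of dicts, each with 'character_id', 'body_id',
           'activity_id', and 'progress_state' (from the game history).
    """
    # Step S200A: the first method applies when there are two or more
    # users and their objects are proximate to each other.
    first_method = len(users) >= 2 and is_proximate(users)
    candidates = users if first_method else users[:1]

    # Steps S200C-S200D / S200H-S200I: check each candidate's
    # progress state against the message-generation condition.
    for user in candidates:
        msg = message_for_state(user["progress_state"])
        if msg is not None:
            # Steps S200E / S200J: a progress state satisfied the
            # condition, so generate a message reflecting that state.
            return msg

    # Steps S200F / S200K: fall back to the message registered for the
    # combination of character ID, body ID, and activity ID.
    key = tuple(
        (u["character_id"], u["body_id"], u["activity_id"]) for u in candidates
    )
    return message_table.get(key, "")
```

In this reading, the second method is simply the first-user-only special case of the same loop, which matches the parallel wording of steps S200G through S200K.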

Referring back to FIG. 11, the controller 20 appends the message to the image of the game field (step S210) in the same manner as step S70.

If the controller 20 determines that there is no vacant space (“NO” in step S190), the controller 20 does not append the message to the image of the game field.

Subsequently, the controller 20 saves the image of the game field in the memory 30 (step S220) in the same manner as step S80.

Afterwards, the controller 20 determines whether or not the image of the game field includes multiple user-operated objects (step S230). More specifically, the SNS processor 23 refers to the object information 32 associated with the image of the game field to identify the user who operates each object. The controller 20 determines whether or not the number of users operating the objects is two or more.

If the image of the game field includes multiple user-operated objects (“YES” in step S230), the controller 20 tags the image to the users operating the user-operated objects and posts the tagged image on the SNS (step S240). More specifically, the SNS processor 23 retrieves the image of the game field generated during the game progress from the image generation history 35 and extracts information on the users identified from the image of the game field. The SNS processor 23 associates the information on the users with the image of the game field and transmits the information to the SNS server 100 via the transceiver 40.

If the image of the game field does not include multiple user-operated objects (“NO” in step S230), the controller 20 posts the image on the SNS (step S250). More specifically, the SNS processor 23 retrieves the image of the game field generated during the game progress from the image generation history 35 and transmits the image to the SNS server 100 via the transceiver 40.
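The posting flow of steps S230 through S250 reduces to a simple branch on the number of operating users. The sketch below is a minimal illustration; the `post_to_sns` callback and the data shapes are hypothetical stand-ins for the transmission via the transceiver 40.

```python
# Sketch of the posting flow in steps S230-S250. The transport
# callback and data shapes are hypothetical illustrations.

def post_image(image, object_info, post_to_sns):
    """Tag and post a saved game-field image.

    image: dict with an 'object_ids' list for the objects it contains.
    object_info: maps object ID -> operating user ID (None for bodies).
    """
    # Step S230: identify the user operating each object in the image;
    # bodies (structures) have no operating user and are skipped.
    users = {
        object_info[obj_id]
        for obj_id in image["object_ids"]
        if object_info.get(obj_id) is not None
    }
    if len(users) >= 2:
        # Step S240: associate (tag) every identified user with the
        # image before transmitting it to the SNS server.
        post_to_sns(image, tags=sorted(users))
    else:
        # Step S250: post the image without tagging.
        post_to_sns(image, tags=[])
```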

The images of the game field generated when multiple user-operated objects are proximate to each other will now be described with reference to FIGS. 15A to 15D.

FIG. 15A schematically shows the positional relationship of objects S11 to S15 on the game field in a scene of the game. As shown in FIG. 15A, the game manager 21 sets the image viewed from a virtual camera X11 as a play view.

As shown in FIG. 15B, in the present embodiment, the display controller 22 displays, on the display 50 as a play view, the image in which the objects S11 to S15 on the game field are viewed from the front. In this case, the display controller 22 displays the object S11 (structure) placed in the game field as a body, the object S12 operated by the first user (first user-operated object, ally character), the object S13 operated by the second user (second user-operated object, ally character), and the battle opponent objects S14 and S15 (enemy characters) on the display 50.

Further, as shown in FIG. 15A, the game manager 21 sets, as a generated view, the image viewed from a virtual camera X12 on the game field.

As shown in FIG. 15C, in the present embodiment, the game manager 21 sets, as a generated view, the image in which the objects on the game field are obliquely viewed based on the relative positions of the first user-operated object S12 and the second user-operated object S13. In this case, the game manager 21 sets, as the generated view, the image including the first user-operated object S12 (ally character) and the second user-operated object S13 (ally character).

Further, as shown in FIG. 15D, the game manager 21 identifies the vacant space in the generated view shown in FIG. 15C and appends a message to the identified vacant space. In this case, the game manager 21 generates a message M2 based on information on the second user. More specifically, when the first user-operated object S12 (ally character) and the second user-operated object S13 (ally character) simultaneously attack the battle opponent objects S14 and S15 (enemy characters), the game manager 21 generates the message M2 reflecting the attack. The game manager 21 appends the generated message M2 to the vicinity of the first user-operated object S12.

The images of the game field generated when multiple user-operated objects are spaced apart from each other will now be described with reference to FIGS. 16A to 16D.

FIG. 16A schematically shows the positional relationship of objects S11 to S15 on the game field in a scene of the game. As shown in FIG. 16A, the game manager 21 sets the image viewed from the virtual camera X11 as a play view.

As shown in FIG. 16B, in the present embodiment, the display controller 22 displays, on the display 50 as a play view, the image in which the objects S11 to S15 on the game field are viewed from the front. In this case, the display controller 22 displays the object S11 (structure) placed in the game field as a body, the first user-operated object S12 (ally character), the second user-operated object S13 (ally character), and the battle opponent objects S14 and S15 (enemy characters) on the display 50.

Further, as shown in FIG. 16A, the game manager 21 sets, as a generated view, the image viewed from a virtual camera X13 on the game field.

As shown in FIG. 16C, in the present embodiment, the game manager 21 sets, as a generated view, the image in which the objects on the game field are obliquely viewed based on the position of the first user-operated object S12. In this case, the game manager 21 sets, as the generated view, the image including the first user-operated object S12 (ally character) without including the second user-operated object S13 (ally character).

Further, as shown in FIG. 16D, the game manager 21 identifies the vacant space in the generated view shown in FIG. 16C and appends a message to the identified vacant space. In this case, the game manager 21 generates a message M3 based on the attribute of the first user-operated object S12. More specifically, when the first user-operated object S12 (ally character) performs a special attack, the game manager 21 generates the message M3 reflecting the special attack. The game manager 21 appends the generated message M3 to the vicinity of the first user-operated object S12.
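The contrast between FIGS. 15A to 15D and FIGS. 16A to 16D amounts to choosing which objects the generated view frames based on the distance between the user-operated objects. The sketch below illustrates that choice; the proximity threshold and the camera labels are hypothetical and are not specified in the disclosure.

```python
import math

# Sketch of how the generated view could be chosen (FIGS. 15A-16D).
# The proximity threshold is a hypothetical illustration.

def choose_generated_view(first_pos, second_pos, threshold=5.0):
    """Return which user-operated objects the generated view frames.

    Positions are (x, y) coordinates in the game field.
    """
    distance = math.dist(first_pos, second_pos)
    if distance <= threshold:
        # Objects are proximate: frame both user-operated objects,
        # as with virtual camera X12 in FIG. 15A.
        return ("first", "second")
    # Objects are spaced apart: frame only the first user-operated
    # object, as with virtual camera X13 in FIG. 16A.
    return ("first",)
```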

As described above, the second embodiment has the following advantages in addition to the advantages of the first embodiment.

(2-1) In the second embodiment, the image generation condition 34 is determined so that multiple user-operated objects in the game field are included in an image. This allows the image of the game field to be generated so as to indicate the relationship of the objects in the scene of the game.

(2-2) In the second embodiment, multiple user-operated objects included in an image are determined based on the relative positions of the objects. Thus, the combination of the objects included in the image of the game field can be changed depending on the scene of the game.

(2-3) In the second embodiment, other users who operate objects included in a scene of the game are identified, the identified users are tagged to an image, and the tagged image is transmitted to the SNS server 100. This allows the image of the game field, in which multiple users are associated with each other, to be transmitted and thus spreads the image of the game field on the SNS.

(2-4) In the second embodiment, a wide variety of posted images are checked by other users on the SNS. This motivates them to play the game.

Each of the above embodiments may be modified as described below.

In the description of the second embodiment, a game played by two users is provided. Instead, a game played by three or more users may be provided. In this case, when three or more user-operated objects are proximate to each other, the controller 20 may determine the image generation condition 34 so that the image includes all the objects. This allows for generation of an image of the game field that can gain the sympathy of many users.

In the second embodiment, other users operating the object S13 included in the scene of the game are tagged to the image, and the tagged image is transmitted to the SNS server 100. The transmission of the image of the game field does not have to involve tagging other users to the image. For example, the controller 20 may transmit the image of the game field to a shared folder of an external server, which is shared among multiple users. Further, the controller 20 may change the SNS server 100, to which the image of the game field is transmitted, for each of the users. This makes it easier for each user to manage the image of the game field. As a result, each user views the image of the game field more frequently and is thus motivated to play the game.

In the second embodiment, objects to be included in the image are determined based on the relative positions of multiple user-operated objects. The objects to be included in the image do not have to be determined based on these relative positions. For example, the controller 20 may set a priority as the attribute of an object and determine the image generation condition 34 so as to generate an image of the object having the highest priority. Further, the controller 20 may determine the image generation condition 34 without taking into consideration the relative positions of multiple user-operated objects in the game field. In this case, the controller 20 may determine the image generation condition 34 so as to generate an image including multiple user-operated objects. Thus, regardless of the relative positions of multiple user-operated objects in the game field, an image in which multiple users are forming a party and playing the game can be generated. When such an image is posted on the SNS, a person viewing the image is motivated to play the game. Alternatively, the controller 20 may determine the image generation condition 34 so as to generate an image including only one of the user-operated objects. Thus, regardless of the relative positions of multiple user-operated objects in the game field, an image in which a single user is uniquely playing the game can be generated. When such an image is posted on the SNS, a person viewing the image is likewise motivated to play the game.
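The priority-based modification in the paragraph above can be illustrated briefly. The sketch below assumes a hypothetical `priority` attribute on each object; the disclosure names the mechanism but does not prescribe a data shape.

```python
# Sketch of the priority-based variant: each object carries a priority
# attribute, and the image generation condition is set to frame the
# highest-priority object regardless of relative positions.
# The attribute name and data shape are hypothetical illustrations.

def select_object_by_priority(objects):
    """objects: list of dicts with 'object_id' and 'priority'."""
    return max(objects, key=lambda o: o["priority"])["object_id"]
```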

In each of the above embodiments, a message is generated based on the game history 33, which accumulates as the game progresses. The message does not have to be generated based on the game history 33. For example, the controller 20 may generate a message based on an activity history of the user obtained through an information processing terminal, for example, a posting history on the SNS and a browsing history on the internet. This allows for generation of a message matching features of the user.

In each of the above embodiments, a message is displayed in a region of the image of the game field other than the object in the image. The display position of a message does not have to be determined based on a region of the image occupied by the object. For example, the controller 20 may narrow down the display position of the message in a vacant space of the image of the game field based on the attribute of the object. This optimizes the display position of the message and increases the effect of rendering the game. Further, the controller 20 may determine the display position of the message based on the attribute of the object without taking into consideration the region of the image occupied by the object. This allows the message to be displayed without limiting the number of characters.
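One way to read "narrowing down the display position based on the attribute of the object" is a preference rule over candidate vacant spaces. The sketch below is purely illustrative; the facing-direction attribute and the data shapes are hypothetical and not taken from the disclosure.

```python
# Sketch of narrowing the message position within the vacant space
# based on an object attribute (here, the character's facing side).
# The attribute and the placement rule are hypothetical illustrations.

def place_message(vacant_spaces, facing):
    """Prefer a vacant space on the side the character faces.

    vacant_spaces: list of dicts with 'side' ('left'/'right') and 'rect'.
    facing: the side the character faces, e.g. 'left' or 'right'.
    """
    for space in vacant_spaces:
        if space["side"] == facing:
            return space["rect"]
    # Fall back to any vacant space when none matches the attribute.
    return vacant_spaces[0]["rect"] if vacant_spaces else None
```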

In each of the above embodiments, a message is appended to the image of the game field when there is a vacant space. Instead, the controller 20 may refrain from appending a message to the image of the game field regardless of whether or not there is a vacant space. This prevents the image of the game field from being interfered with by a displayed message and thus avoids situations in which the message lowers the artistry of an image of the game field resembling one photographed by a photographer.

In each of the above embodiments, the parameter related to a user-operated object (for example, the position and size of the object) is applied as the attribute of the object used to determine the image generation condition 34. Instead, the type of an item used for battle by the user-operated object (for example, an ax or a bow) may be applied as the attribute of the object used to determine the image generation condition 34. In this case, a region influenced by the effect of the item may be factored into the image generation condition. For example, subsequent to the image illustrating the moment the character draws an arrow with a bow, the angle of view may be enlarged so as to generate an image including both the character, which drew the bow, and the shot arrow. In contrast, in the case of an item held by the character when in use (for example, an ax), the image can be generated with an angle of view that approaches the character. This allows a wider variety of images of the game field to be generated.

In each of the above embodiments, the information on the position, direction, and angle of view serving as the viewpoint when generating an image is determined as the image generation condition 34 based on the attribute of the object included in the scene of the game. That is, the composition when generating an image is determined based on the attribute of the object included in the scene of the game. The information determined as the image generation condition 34 is not limited to information on the composition of an image. For example, the controller 20 may determine the timing of generating an image as the game progresses based on the attribute of the object included in the scene of the game. In this case, the controller 20 may determine the timing of generating an image in addition to the composition of the image based on the attribute of the object included in the scene of the game. Alternatively, whereas the player manually sets the composition of the image, the controller 20 may determine the timing of generating an image based on the attribute of the object. As another option, whereas the controller 20 automatically determines the image generation condition 34 based on the attribute of the object, the player may manually determine a message subject to display. Conversely, whereas the player manually sets the image generation condition 34, the controller 20 may automatically determine a message subject to display based on the attribute of the object.

In each of the above embodiments, the image of the game field is generated as a still image. Instead, the image of the game field may be generated as a moving image. Thus, a more realistic, appealing image of the game field can be posted on the SNS.

In each of the above embodiments, at least some of the operations and processes executed by the user device may be executed by a server device connected to the user device. For example, either the server device or the user device may execute the processes of, for example, display control on various views displayed in the user device and control on various GUIs. As another option, the server device and the user device may cooperate to execute the processes of display control on various views and control on various GUIs. For example, some of the various game views may be displayed by the user device based on the data generated by the server device (i.e., a web view), and other game views may be displayed by a native application installed in the user device (i.e., a native view). In this manner, the game according to each of the above embodiments may be a hybrid game in which the server device and the user device are each in charge of some of the processes.

An information processing device such as a computer or a mobile phone may suitably be used to function as the server device or the user device according to each of the above embodiments. Such an information processing device can be implemented by storing, in a memory of the information processing device, a program describing the processing content that implements the functions of the server device or the user device according to the embodiments and then reading and executing the program with the CPU of the information processing device.

In the description of each of the above embodiments, the case of providing the game in which objects battle with each other is described as an example of a game. The present disclosure may be applied to other games such as a simulation game in which the player progresses in the game field from the viewpoint of objects placed in the game field. That is, as long as the game controls objects in the game field, the present disclosure may be applied.

The controller20is not limited to one that performs software processing on all processes executed by itself. For example, the controller20may be equipped with a dedicated hardware circuit (e.g., application specific integrated circuit: ASIC) that performs hardware processing on at least some of the processes to be executed by itself. That is, the controller20may be configured as 1) one or more processors that operate in accordance with a computer program (software), 2) one or more dedicated hardware circuits that execute at least some of the various processes, or 3) circuitry including combinations thereof. The processor includes a CPU and memories such as a RAM and a ROM, and the memory stores program codes or instructions configured to cause the CPU to execute the processing. The memories, that is, computer-readable media, include any type of media that are accessible by general-purpose computers and dedicated computers.

Claims

  1. A non-transitory computer-readable medium that stores a computer-executable instruction, wherein the instruction, when executed by circuitry of a system, causes the circuitry to: obtain game object information associated with a first game object in a virtual space of a game, the first game object being one of a plurality of game objects, and the first game object being operated by a first user; in response to a predetermined event occurring, determine a generation condition for an image of the virtual space and the first game object based on the game object information; and automatically generate the image based on the generation condition, wherein the plurality of game objects further include a second game object operated by a second user, and the instruction, when executed by the circuitry, causes the circuitry to identify the first user and the second user and transmit the image as information associated with both the first user and the second user.
  2. The non-transitory computer-readable medium according to claim 1, wherein the game object information includes information on a size of the first game object.
  3. The non-transitory computer-readable medium according to claim 1, wherein the game object information includes relative positions of the game objects.
  4. The non-transitory computer-readable medium according to claim 1, wherein the instruction, when executed by the circuitry, causes the circuitry to determine the generation condition based on an attribute of the first game object and a background of the virtual space.
  5. The non-transitory computer-readable medium according to claim 1, wherein the instruction, when executed by the circuitry, causes the circuitry to determine the generation condition based on at least one of the game object information set in advance in a memory and the game object information that is changed when the game progresses.
  6. The non-transitory computer-readable medium according to claim 1, wherein the instruction, when executed by the circuitry, causes the circuitry to: display the first game object and the second game object in a common virtual space; and determine the generation condition to generate the image, the image including the first game object and the second game object.
  7. The non-transitory computer-readable medium according to claim 6, wherein the instruction, when executed by the circuitry, causes the circuitry to identify one of the plurality of game objects included in the image from among multiple of the game objects included in a scene of the game.
  8. The non-transitory computer-readable medium according to claim 1, wherein the instruction, when executed by the circuitry, causes the circuitry to display a message in a region of the image other than the first game object included in the image.
  9. The non-transitory computer-readable medium according to claim 8, wherein the instruction, when executed by the circuitry, causes the circuitry to generate the message based on data that accumulates as the game progresses.
  10. The non-transitory computer-readable medium according to claim 1, wherein the image is automatically generated by photographing the virtual space and the first game object.
  11. A method comprising: obtaining, by circuitry of a system, game object information associated with a first game object in a virtual space of a game, the first game object being one of a plurality of game objects, and the first game object being operated by a first user; in response to a predetermined event occurring, determining, by the circuitry, a generation condition for an image of the virtual space and the first game object based on the game object information; and automatically generating, by the circuitry, the image based on the generation condition, wherein the plurality of game objects further include a second game object operated by a second user, and the method further includes identifying, by the circuitry, the first user and the second user and transmitting, by the circuitry, the image as information associated with both the first user and the second user.
  12. The method according to claim 11, wherein the image is automatically generated by photographing the virtual space and the first game object.
  13. A system including circuitry, wherein the circuitry is configured to: obtain game object information associated with a first game object in a virtual space of a game, the first game object being one of a plurality of game objects, and the first game object being operated by a first user; in response to a predetermined event occurring, determine a generation condition for an image of the virtual space and the first game object based on the game object information; and automatically generate the image based on the generation condition, wherein the plurality of game objects further include a second game object operated by a second user, and the circuitry is further configured to identify the first user and the second user and transmit the image as information associated with both the first user and the second user.
  14. The system according to claim 13, wherein the image is automatically generated by photographing the virtual space and the first game object.
