U.S. Pat. No. 10,438,394
INFORMATION PROCESSING METHOD, VIRTUAL SPACE DELIVERING SYSTEM AND APPARATUS THEREFOR
Assignee: COLOPL, INC.
Issue Date: March 1, 2018
Illustrative Figure
Abstract
A method includes defining a virtual space. The virtual space includes a first avatar object and a second avatar object. The first avatar object is associated with a first user terminal, which comprises a first head-mounted device (HMD) associated with a first user. The second avatar object is associated with a second user terminal, which comprises a second HMD associated with a second user. The method includes defining a visual field in the virtual space in association with a motion of the second HMD. The method includes generating a visual-field image that corresponds to the visual field. The method includes displaying the visual-field image on the second HMD. The method includes receiving first information indicating that the first user is not wearing the first HMD. The method includes changing the visual-field image on the second HMD in response to the first information being received.
Description
DETAILED DESCRIPTION
Description of at Least One Embodiment of this Disclosure
An outline of at least one embodiment of this disclosure is now described.
(1) An information processing method to be executed on a computer in a virtual space delivering system including: a first user terminal including a first head-mounted device to be worn on a head of a first user; and a second user terminal including a second head-mounted device to be worn on a head of a second user, the information processing method including: (a) generating virtual space data for defining a virtual space including a first avatar associated with the first user and a second avatar associated with the second user; (b) updating a visual-field image that is displayed on the second head-mounted device based on a motion of the second head-mounted device and the virtual space data; (d) presenting, to the second user, a fact that the first user is not wearing the first head-mounted device with the second user terminal in response to reception of unworn state information indicating the fact that the first user is not wearing the first head-mounted device.
According to the method described above, when the unworn state information indicating the fact that the first user is not wearing the first head-mounted device (hereinafter referred to as “first HMD”) is received, the second user terminal presents to the second user the fact that the first user is not wearing the first HMD. In this manner, the second user can easily grasp the fact that the first user is not wearing the first HMD when the second user is communicating to/from the first user in the virtual space. Therefore, it is possible to provide the user with a rich virtual experience.
In particular, the second user may feel strange about the first user who does not exhibit any reaction when the second user does not grasp the fact that the first user is not wearing the first HMD. In this manner, it is possible to avoid a situation in which the second user feels strange about the first user who does not exhibit any reaction by allowing the second user to grasp the fact that the first user is not wearing the first HMD.
(2) The information processing method according to Item (1), in which the (d) presenting of the fact includes visualizing the fact that the first user is not wearing the first head-mounted device on a field-of-view image displayed on the second head-mounted device when receiving the unworn state information.
According to the method described above, when the unworn state information indicating the fact that the first user is not wearing the first head-mounted device (first HMD) is received, information indicating the fact that the first user is not wearing the first HMD is visualized in the field-of-view image displayed on the second head-mounted device (second HMD). In this manner, the second user can easily grasp the fact that the first user is not wearing the first HMD by visually recognizing, through the second HMD, the information indicating the fact that the first user is not wearing the first HMD when communicating to/from the first user in the virtual space.
(3) The information processing method according to Item (1) or (2), further including (e) updating a facial expression of the first avatar based on received face information representing a facial expression of the first avatar, in which the (d) presenting of the fact includes visualizing the first avatar whose facial expression is set to a first mode on a field-of-view image displayed on the second head-mounted device when receiving the unworn state information.
According to the method described above, when the unworn state information indicating the fact that the first user is not wearing the first head-mounted device (first HMD) is received, the first avatar whose facial expression is set to the first mode is visualized on the field-of-view image displayed on the second head-mounted device (second HMD). In this manner, the second user can easily grasp the fact that the first user is not wearing the first HMD by visually recognizing the fact that the facial expression of the first avatar is a predetermined facial expression when communicating to/from the first user in the virtual space.
In particular, when the first user is not wearing the first HMD, the facial expression of the first avatar is not updated at all. Thus, the second user may feel strange about the first user when the second user does not grasp the fact that the first user is not wearing the first HMD. In this manner, it is possible to avoid a situation in which the second user feels strange about the first user who does not exhibit any reaction by allowing the second user to grasp the fact that the first user is not wearing the first HMD.
(4) The information processing method according to Item (3), in which the first mode is a default facial expression of the first avatar.
According to the method described above, when the unworn state information indicating the fact that the first user is not wearing the first head-mounted device (first HMD) is received, the facial expression of the first avatar becomes the default facial expression of the first avatar. In this manner, the second user can easily grasp the fact that the first user is not wearing the first HMD by visually recognizing the fact that the facial expression of the first avatar becomes the default facial expression when communicating to/from the first user in the virtual space.
(5) The information processing method according to Item (3), in which the first mode is a facial expression selected in advance as a mode indicating the fact that the first user is not wearing the first head-mounted device.
According to the method described above, when the unworn state information indicating the fact that the first user is not wearing the first head-mounted device (first HMD) is received, the facial expression of the first avatar is set to a facial expression (hereinafter referred to as “selected facial expression”) selected in advance by the first user as a mode indicating the fact that the first user is not wearing the first head-mounted device. In this manner, the second user can easily grasp the fact that the first user is not wearing the first HMD by visually recognizing the fact that the facial expression of the first avatar becomes the selected facial expression when communicating to/from the first user in the virtual space.
(6) The information processing method according to any one of Items (1) to (5), in which the first user terminal includes a wearing sensor configured to detect whether the first user is wearing the first head-mounted device, and in which, when the wearing sensor detects the fact that the first user is not wearing the first head-mounted device, the first user terminal outputs the unworn state information and the second user terminal receives the output unworn state information from the first user terminal.
According to the method described above, the fact that the first user is not wearing the first head-mounted device (first HMD) is identified based on the information transmitted from the wearing sensor of the first user terminal. After that, the unworn state information indicating the fact that the first user is not wearing the first HMD is transmitted. In this manner, it is possible to automatically identify the fact that the first user is not wearing the first HMD based on the information output from the wearing sensor of the first user terminal.
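As a rough sender-side sketch of Item (6) (the sensor and network calls below are hypothetical placeholders, not APIs defined in this disclosure), the first user terminal might poll the wearing sensor and emit worn or unworn state information only when the state changes:

```python
# Illustrative sketch only: read_wearing_sensor() -> bool and send_to_server(bytes)
# are assumed to be supplied by the HMD SDK and the terminal's networking layer.
import json
import time

def monitor_wearing_state(read_wearing_sensor, send_to_server, user_id, poll_interval=0.5):
    last_state = read_wearing_sensor()
    while True:  # runs for the lifetime of the session
        worn = read_wearing_sensor()
        if worn != last_state:
            message = {
                "type": "worn_state" if worn else "unworn_state",
                "user_id": user_id,
                "timestamp": time.time(),
            }
            send_to_server(json.dumps(message).encode("utf-8"))
            last_state = worn
        time.sleep(poll_interval)
```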
(7) The information processing method according to any one of Items (1) to (6), further including presenting, to the second user, the fact that the first user is wearing the first head-mounted device with the second user terminal when receiving worn state information indicating the fact that the first user has put on the first head-mounted device again after removing it.
According to the method described above, when the worn state information indicating the fact that the first user has put on the first head-mounted device (first HMD) again after removing it is received, the second user terminal presents to the second user the fact that the first user is wearing the first HMD. In this manner, the second user can easily recognize the fact that the first user has put on the first HMD again. Therefore, it is possible to provide the user with a rich virtual experience.
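By way of a non-limiting illustration of the receiver-side behavior of Items (3) to (5) and (7) (the class, function, and expression names below are assumptions, not taken from this disclosure), face information updates the first avatar's expression only while the first user is wearing the first HMD, and receipt of unworn state information freezes the expression to the first mode:

```python
# Illustrative sketch: avatar expression handling on the second user terminal.
from dataclasses import dataclass

DEFAULT_EXPRESSION = "neutral"      # Item (4): default facial expression of the avatar
UNWORN_EXPRESSION = "eyes_closed"   # Item (5): expression selected in advance (assumed label)

@dataclass
class Avatar:
    user_id: str
    expression: str = DEFAULT_EXPRESSION
    hmd_worn: bool = True

def on_face_tracking(avatar: Avatar, tracked_expression: str) -> None:
    """Item (3)(e): update the expression from received face information,
    but only while the remote user is wearing the HMD."""
    if avatar.hmd_worn:
        avatar.expression = tracked_expression

def on_unworn_state_info(avatar: Avatar, use_selected: bool = False) -> None:
    """Item (3)(d): visualize the unworn state by switching to the first mode."""
    avatar.hmd_worn = False
    avatar.expression = UNWORN_EXPRESSION if use_selected else DEFAULT_EXPRESSION

def on_worn_state_info(avatar: Avatar) -> None:
    """Item (7): the remote user has put the HMD back on; resume normal updates."""
    avatar.hmd_worn = True
```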
(8) A program for executing the information processing method of any one of Items (1) to (7) on a computer.
According to the program described above, it is possible to provide the user with a rich virtual experience.
(9) A virtual space delivering system, which is configured to execute the information processing method of any one of Items (1) to (7), the virtual space delivering system including: a first user terminal including a first head-mounted device to be worn on a head of a first user; and a second user terminal including a second head-mounted device to be worn on a head of a second user.
According to the virtual space delivering system described above, it is possible to provide the user with a rich virtual experience.
(10) An apparatus, including a processor and a memory having stored thereon a computer-readable instruction, in which the apparatus is configured to execute the information processing method of any one of Items (1) to (7) when the computer-readable instruction is executed by the processor.
According to the apparatus described above, it is possible to provide the user with a rich virtual experience.
(11) A program for causing a computer for providing a user with a virtual experience to execute: moving an operation object in a virtual space in association with a motion of a part of a body of the user; performing, by the operation object, a first action on a virtual object whose motion in the virtual space is controllable based on a motion of another user different from the user; and performing a second action on the another user based on execution of the first action.
According to the program described above, it is possible to improve the virtual experience of the user. In particular, the plurality of users sharing the virtual space perform an action through an intuitive operation using an operation object, to thereby be able to smoothly communicate to/from one another in the virtual space without impairing the sense of immersion.
(12) The program according to Item (11), in which the second action is performed when the first action is performed under a state in which the virtual object is not controlled by the another user.
According to the program described above, the second action is performed on a virtual object that is not controlled at that time, to thereby be able to induce another user to participate in the virtual space.
(13) The program according to Item (11) or (12), in which the first action includes a motion of touching the virtual object in a process of a reciprocal motion of the operation object.
(14) The program according to Item (13), in which the first action includes a motion of touching the virtual object in the process of the reciprocal motion at an acceleration of a fixed value or more.
(15) The program according to Item (13) or (14), in which the first action includes a motion of passing a relative coordinate associated with the virtual object a predetermined number of times or more within a certain period of time.
According to those programs, it is possible to prevent erroneous detection of the first action by excluding cases in which the operation object has merely touched the virtual object by accident.
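A minimal sketch of how the detection conditions of Items (13) to (15) might be combined is shown below; the threshold values and class name are assumptions for illustration only.

```python
# Illustrative sketch: first-action detection with an acceleration threshold (Item (14))
# and a pass-count-within-a-time-window condition (Item (15)).
from collections import deque

ACCEL_THRESHOLD = 8.0      # m/s^2, assumed "fixed value" of Item (14)
PASS_COUNT_THRESHOLD = 3   # assumed "predetermined number of times" of Item (15)
PASS_WINDOW = 1.5          # seconds, assumed "certain period of time" of Item (15)

class FirstActionDetector:
    def __init__(self):
        self._pass_times = deque()  # timestamps of recent passes through the coordinate

    def on_touch(self, acceleration):
        """Item (14): a touch counts only when the operation object moves fast enough."""
        return acceleration >= ACCEL_THRESHOLD

    def on_pass(self, now):
        """Item (15): count passes through the relative coordinate associated with the
        virtual object and trigger only when enough passes fall inside the window."""
        self._pass_times.append(now)
        while self._pass_times and now - self._pass_times[0] > PASS_WINDOW:
            self._pass_times.popleft()
        return len(self._pass_times) >= PASS_COUNT_THRESHOLD
```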
(16) The program according to any one of Items (11) to (15), in which the second action includes notification for inducing the another user to participate in the virtual space.
According to the program described above, it is possible to allow the other users sharing the virtual space to easily recognize, for example, the fact that the other users are requested to participate in a multiplayer game.
(17) The program according to Item (16), in which the program causes the computer to further execute: transitioning the virtual object from a first state to a second state when reaction to the second action is exhibited; and transitioning the virtual object to a third state when the virtual object has entered a state of being controlled by the another user within a certain period of time after the transition to the second state.
According to the program described above, the user who has performed the first action can easily grasp the fact that another user has reacted or the virtual object has entered the state of being controlled by another user.
(18) The program according to Item (17), in which the program causes the computer to further execute returning the virtual object to the first state when the virtual object has not entered the state of being controlled by the another user within the certain period of time after the transition to the second state.
According to the program described above, the user who has performed the first action can easily grasp the fact that the virtual object has not entered the state of being controlled by another user in spite of execution of the second action.
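The transitions of Items (17) and (18) can be summarized as a small state machine; the sketch below uses assumed state names and an assumed timeout value.

```python
# Illustrative sketch: virtual object state transitions for Items (17) and (18).
import time

FIRST_STATE, SECOND_STATE, THIRD_STATE = "idle", "reacting", "controlled"
CONTROL_TIMEOUT = 10.0  # seconds, assumed "certain period of time"

class VirtualObject:
    def __init__(self):
        self.state = FIRST_STATE
        self._second_state_since = None

    def on_reaction_to_second_action(self):
        """Item (17): reaction to the second action moves the object to the second state."""
        self.state = SECOND_STATE
        self._second_state_since = time.monotonic()

    def on_controlled_by_other_user(self):
        """Item (17): control by the other user within the time limit yields the third state."""
        if self.state == SECOND_STATE:
            self.state = THIRD_STATE

    def update(self):
        """Called every frame; Item (18): revert when control never arrives in time."""
        if (self.state == SECOND_STATE and self._second_state_since is not None
                and time.monotonic() - self._second_state_since > CONTROL_TIMEOUT):
            self.state = FIRST_STATE
            self._second_state_since = None
```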
(19) The program according to any one of Items (16) to (18), in which details of the notification are different depending on at least one of a strength of a motion of the first action performed on the virtual object or a type of the first action performed on the virtual object.
According to the program described above, another user can easily grasp the level of a request for participation in the multiplayer game, which has been given by the user who has performed the first action, and can determine whether to participate in the virtual space depending on the level of the request.
(20) The program according to any one of Items (11) to (19), in which the program causes the computer to further execute: performing the first action with the operation object on another virtual object whose motion in the virtual space is controllable by the computer; and transitioning a game provided in the virtual space from a first scene to a second scene based on execution of the first action.
According to the program described above, it is possible to progress the game without impairing the sense of immersion of the user into the virtual space.
(21) An information processing apparatus, which is configured to provide a user with a virtual experience, the information processing apparatus including a processor, in which the processor is configured to control the information processing apparatus to execute: moving an operation object in a virtual space in association with a motion of a part of a body of the user; performing a first action with the operation object on a virtual object whose motion in the virtual space is controllable based on a motion of another user different from the user; and performing a second action on the another user based on execution of the first action.
According to the information processing apparatus described above, it is possible to improve the virtual experience of the user.
(22) An information processing system, which is configured to provide a user with a virtual experience, the information processing system including a plurality of processors, in which each of the plurality of processors is configured to control the information processing system to execute: moving an operation object in a virtual space in association with a motion of a part of a body of the user; performing a first action with the operation object on a virtual object whose motion in the virtual space is controllable based on a motion of another user different from the user; and performing a second action on the another user based on execution of the first action.
According to the information processing system described above, it is possible to improve the virtual experience of the user.
(23) An information processing method to be executed on a computer to provide a user with a virtual experience, the information processing method including: moving an operation object in a virtual space in association with a motion of a part of a body of the user; performing a first action with the operation object on a virtual object whose motion in the virtual space is controllable based on a motion of another user different from the user; and performing a second action on the another user based on execution of the first action.
According to the information processing method described above, it is possible to improve the virtual experience of the user.
(24) A program for causing a computer for providing a user with a virtual space to execute: performing a first action in a virtual space; performing a second action on another user different from the user based on execution of the first action; and performing a third action in the virtual space based on reaction to the second action.
According to the program described above, it is possible to improve the virtual experience of the user. In particular, it is possible to provide a novel virtual experience that achieves a seamless boundary between the real space and the virtual space by performing the third action in the virtual space based on reaction to the second action performed on another user.
(25) The program according to Item (24), in which the first action is an action for identifying the another user, and the third action is an action performed by a virtual object associated with the another user in the virtual space.
According to the program described above, the plurality of users sharing the virtual space can smoothly communicate to/from one another in the virtual space by performing actions on one another.
(26) The program according to Item (24) or (25), in which the program causes the computer to further execute moving an operation object in the virtual space in association with a motion of a part of a body of the user, and in which the first action is performed through use of the operation object.
According to the program described above, the plurality of users sharing the virtual space can perform actions through an intuitive operation using operation objects, and the sense of immersion is not impaired.
(27) The program according to any one of Items (24) to (26), in which the second action includes notification for inducing the another user to participate in the virtual space.
According to the program described above, the other users sharing the virtual space can easily recognize, for example, the fact that participation in the multiplayer game is requested, and it is possible to promote usage of the multiplayer game.
(28) The program according to any one of Items (24) to (27), in which the third action includes transitioning an action performed by a virtual object associated with the another user in the virtual space from a first state to a second state.
According to the program described above, the user who has performed the first action can easily grasp whether or not another user has reacted.
(29) An information processing apparatus, which is configured to provide a user with a virtual experience, the information processing apparatus including a processor, in which the processor is configured to control the information processing apparatus to execute: performing a first action in a virtual space; performing a second action on another user different from the user based on execution of the first action; and performing a third action in the virtual space based on reaction to the second action.
According to the information processing apparatus described above, it is possible to improve the virtual experience of the user.
(30) An information processing system, which is configured to provide a user with a virtual experience, the information processing system including a plurality of processors, in which each of the plurality of processors is configured to control the information processing system to execute: performing a first action in a virtual space; performing a second action on another user different from the user based on execution of the first action; and performing a third action in the virtual space based on reaction to the second action.
According to the information processing system described above, it is possible to improve the virtual experience of the user.
(31) An information processing method to be executed on a computer to provide a user with a virtual experience, the information processing method including: performing a first action in a virtual space; performing a second action on another user different from the user based on execution of the first action; and performing a third action in the virtual space based on reaction to the second action.
According to the information processing method described above, it is possible to improve the virtual experience of the user.
Now, with reference to the drawings, embodiments of this technical idea are described in detail. In the following description, like components are denoted by like reference symbols. The same applies to the names and functions of those components. Therefore, detailed description of those components is not repeated. In one or more embodiments described in this disclosure, components of respective embodiments can be combined with each other, and the combination also serves as a part of the embodiments described in this disclosure.
[Configuration of HMD System]
With reference to FIG. 1, a configuration of a head-mounted device (HMD) system 100 is described. FIG. 1 is a diagram of a system 100 including a head-mounted display (HMD) according to at least one embodiment of this disclosure. The system 100 is usable for household use or for professional use.
The system 100 includes a server 600, HMD sets 110A, 110B, 110C, and 110D, an external device 700, and a network 2. Each of the HMD sets 110A, 110B, 110C, and 110D is capable of independently communicating to/from the server 600 or the external device 700 via the network 2. In some instances, the HMD sets 110A, 110B, 110C, and 110D are also collectively referred to as "HMD set 110". The number of HMD sets 110 constructing the HMD system 100 is not limited to four, but may be three or less, or five or more. The HMD set 110 includes an HMD 120, a computer 200, an HMD sensor 410, a display 430, and a controller 300. The HMD 120 includes a monitor 130, an eye gaze sensor 140, a first camera 150, a second camera 160, a microphone 170, and a speaker 180. In at least one embodiment, the controller 300 includes a motion sensor 420.
In at least one aspect, the computer 200 is connected to the network 2, for example, the Internet, and is able to communicate to/from the server 600 or other computers connected to the network 2 in a wired or wireless manner. Examples of the other computers include a computer of another HMD set 110 or the external device 700. In at least one aspect, the HMD 120 includes a sensor 190 instead of the HMD sensor 410. In at least one aspect, the HMD 120 includes both the sensor 190 and the HMD sensor 410.
The HMD 120 is wearable on a head of a user 5 to display a virtual space to the user 5 during operation. More specifically, in at least one embodiment, the HMD 120 displays each of a right-eye image and a left-eye image on the monitor 130. Each eye of the user 5 is able to visually recognize a corresponding image from the right-eye image and the left-eye image so that the user 5 may recognize a three-dimensional image based on the parallax of both of the user's eyes. In at least one embodiment, the HMD 120 is any one of a so-called head-mounted display including a monitor or a head-mounted device to which a smartphone or another terminal including a monitor can be mounted.
The monitor 130 is implemented as, for example, a non-transmissive display device. In at least one aspect, the monitor 130 is arranged on a main body of the HMD 120 so as to be positioned in front of both the eyes of the user 5. Therefore, when the user 5 is able to visually recognize the three-dimensional image displayed by the monitor 130, the user 5 is immersed in the virtual space. In at least one aspect, the virtual space includes, for example, a background, objects that are operable by the user 5, or menu images that are selectable by the user 5. In at least one aspect, the monitor 130 is implemented as a liquid crystal monitor or an organic electroluminescence (EL) monitor included in a so-called smartphone or other information display terminals.
In at least one aspect, the monitor 130 is implemented as a transmissive display device. In this case, the user 5 is able to see through the HMD 120 covering the eyes of the user 5, for example, smartglasses. In at least one embodiment, the transmissive monitor 130 is configured as a temporarily non-transmissive display device through adjustment of a transmittance thereof. In at least one embodiment, the monitor 130 is configured to display a real space and a part of an image constructing the virtual space simultaneously. For example, in at least one embodiment, the monitor 130 displays an image of the real space captured by a camera mounted on the HMD 120, or may enable recognition of the real space by setting the transmittance of a part of the monitor 130 sufficiently high to permit the user 5 to see through the HMD 120.
In at least one aspect, the monitor 130 includes a sub-monitor for displaying a right-eye image and a sub-monitor for displaying a left-eye image. In at least one aspect, the monitor 130 is configured to integrally display the right-eye image and the left-eye image. In this case, the monitor 130 includes a high-speed shutter. The high-speed shutter operates so as to alternately display the right-eye image to the right eye of the user 5 and the left-eye image to the left eye of the user 5, so that only one of the user's 5 eyes is able to recognize the image at any single point in time.
In at least one aspect, the HMD 120 includes a plurality of light sources (not shown). Each light source is implemented by, for example, a light emitting diode (LED) configured to emit an infrared ray. The HMD sensor 410 has a position tracking function for detecting the motion of the HMD 120. More specifically, the HMD sensor 410 reads a plurality of infrared rays emitted by the HMD 120 to detect the position and the inclination of the HMD 120 in the real space.
In at least one aspect, the HMD sensor 410 is implemented by a camera. In at least one aspect, the HMD sensor 410 uses image information of the HMD 120 output from the camera to execute image analysis processing, to thereby enable detection of the position and the inclination of the HMD 120.
In at least one aspect, the HMD 120 includes the sensor 190 instead of, or in addition to, the HMD sensor 410 as a position detector. In at least one aspect, the HMD 120 uses the sensor 190 to detect the position and the inclination of the HMD 120. For example, in at least one embodiment, when the sensor 190 is an angular velocity sensor, a geomagnetic sensor, or an acceleration sensor, the HMD 120 uses any or all of those sensors instead of (or in addition to) the HMD sensor 410 to detect the position and the inclination of the HMD 120. As an example, when the sensor 190 is an angular velocity sensor, the angular velocity sensor detects over time the angular velocity about each of three axes of the HMD 120 in the real space. The HMD 120 calculates a temporal change of the angle about each of the three axes of the HMD 120 based on each angular velocity, and further calculates an inclination of the HMD 120 based on the temporal change of the angles.
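As an illustrative sketch of that calculation (a simple per-axis Euler integration is assumed; the actual sensor fusion used is not specified in this disclosure):

```python
# Illustrative sketch: tracking the HMD inclination by integrating angular velocity.
def integrate_inclination(angles, angular_velocity, dt):
    """angles and angular_velocity are (pitch, yaw, roll) tuples in radians and
    radians/second; dt is the sampling interval of the angular velocity sensor."""
    return tuple(a + w * dt for a, w in zip(angles, angular_velocity))

# Example: starting level, rotating at 0.2 rad/s about the yaw axis for one 10 ms sample.
angles = (0.0, 0.0, 0.0)
angles = integrate_inclination(angles, (0.0, 0.2, 0.0), 0.01)
```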
The eye gaze sensor 140 detects a direction in which the lines of sight of the right eye and the left eye of the user 5 are directed. That is, the eye gaze sensor 140 detects the line of sight of the user 5. The direction of the line of sight is detected by, for example, a known eye tracking function. The eye gaze sensor 140 is implemented by a sensor having the eye tracking function. In at least one aspect, the eye gaze sensor 140 includes a right-eye sensor and a left-eye sensor. In at least one embodiment, the eye gaze sensor 140 is, for example, a sensor configured to irradiate the right eye and the left eye of the user 5 with an infrared ray, and to receive reflection light from the cornea and the iris with respect to the irradiation light, to thereby detect a rotational angle of each of the user's 5 eyeballs. In at least one embodiment, the eye gaze sensor 140 detects the line of sight of the user 5 based on each detected rotational angle.
The first camera 150 photographs a lower part of a face of the user 5. More specifically, the first camera 150 photographs, for example, the nose or mouth of the user 5. The second camera 160 photographs, for example, the eyes and eyebrows of the user 5. A side of a casing of the HMD 120 on the user 5 side is defined as an interior side of the HMD 120, and a side of the casing of the HMD 120 on a side opposite to the user 5 side is defined as an exterior side of the HMD 120. In at least one aspect, the first camera 150 is arranged on an exterior side of the HMD 120, and the second camera 160 is arranged on an interior side of the HMD 120. Images generated by the first camera 150 and the second camera 160 are input to the computer 200. In at least one aspect, the first camera 150 and the second camera 160 are implemented as a single camera, and the face of the user 5 is photographed with this single camera.
The microphone 170 converts an utterance of the user 5 into a voice signal (electric signal) for output to the computer 200. The speaker 180 converts the voice signal into a voice for output to the user 5. In at least one embodiment, the speaker 180 converts other signals into audio information provided to the user 5. In at least one aspect, the HMD 120 includes earphones in place of the speaker 180.
The controller 300 is connected to the computer 200 through wired or wireless communication. The controller 300 receives input of a command from the user 5 to the computer 200. In at least one aspect, the controller 300 is held by the user 5. In at least one aspect, the controller 300 is mountable to the body or a part of the clothes of the user 5. In at least one aspect, the controller 300 is configured to output at least any one of a vibration, a sound, or light based on the signal transmitted from the computer 200. In at least one aspect, the controller 300 receives from the user 5 an operation for controlling the position and the motion of an object arranged in the virtual space.
In at least one aspect, the controller 300 includes a plurality of light sources. Each light source is implemented by, for example, an LED configured to emit an infrared ray. The HMD sensor 410 has a position tracking function. In this case, the HMD sensor 410 reads a plurality of infrared rays emitted by the controller 300 to detect the position and the inclination of the controller 300 in the real space. In at least one aspect, the HMD sensor 410 is implemented by a camera. In this case, the HMD sensor 410 uses image information of the controller 300 output from the camera to execute image analysis processing, to thereby enable detection of the position and the inclination of the controller 300.
In at least one aspect, the motion sensor 420 is mountable on the hand of the user 5 to detect the motion of the hand of the user 5. For example, the motion sensor 420 detects a rotational speed, a rotation angle, and the number of rotations of the hand. The detected signal is transmitted to the computer 200. The motion sensor 420 is provided to, for example, the controller 300. In at least one aspect, the motion sensor 420 is provided to, for example, the controller 300 capable of being held by the user 5. In at least one aspect, to help prevent accidental release of the controller 300 in the real space, the controller 300 is mountable on an object that is worn on a hand of the user 5 and does not easily fly away, such as a glove-type object. In at least one aspect, a sensor that is not mountable on the user 5 detects the motion of the hand of the user 5. For example, a signal of a camera that photographs the user 5 may be input to the computer 200 as a signal representing the motion of the user 5. As at least one example, the motion sensor 420 and the computer 200 are connected to each other through wired or wireless communication. In the case of wireless communication, the communication mode is not particularly limited, and for example, Bluetooth (trademark) or other known communication methods are usable.
The display 430 displays an image similar to an image displayed on the monitor 130. With this, a user other than the user 5 wearing the HMD 120 can also view an image similar to that of the user 5. An image to be displayed on the display 430 is not required to be a three-dimensional image, but may be a right-eye image or a left-eye image. For example, a liquid crystal display or an organic EL monitor may be used as the display 430.
In at least one embodiment, the server 600 transmits a program to the computer 200. In at least one aspect, the server 600 communicates to/from another computer 200 for providing virtual reality to the HMD 120 used by another user. For example, when a plurality of users play a participatory game, for example, in an amusement facility, each computer 200 communicates to/from another computer 200 via the server 600 with a signal that is based on the motion of each user, to thereby enable the plurality of users to enjoy a common game in the same virtual space. Each computer 200 may communicate to/from another computer 200 with the signal that is based on the motion of each user without intervention of the server 600.
The external device 700 is any suitable device as long as the external device 700 is capable of communicating to/from the computer 200. The external device 700 is, for example, a device capable of communicating to/from the computer 200 via the network 2, or is a device capable of directly communicating to/from the computer 200 by near field communication or wired communication. Peripheral devices such as a smart device, a personal computer (PC), or the computer 200 are usable as the external device 700, in at least one embodiment, but the external device 700 is not limited thereto.
[Hardware Configuration of Computer]
With reference to FIG. 2, the computer 200 in at least one embodiment is described. FIG. 2 is a block diagram of a hardware configuration of the computer 200 according to at least one embodiment. The computer 200 includes a processor 210, a memory 220, a storage 230, an input/output interface 240, and a communication interface 250. Each component is connected to a bus 260. In at least one embodiment, at least one of the processor 210, the memory 220, the storage 230, the input/output interface 240, or the communication interface 250 is part of a separate structure and communicates with other components of the computer 200 through a communication path other than the bus 260.
The processor 210 executes a series of commands included in a program stored in the memory 220 or the storage 230 based on a signal transmitted to the computer 200 or in response to a condition determined in advance. In at least one aspect, the processor 210 is implemented as a central processing unit (CPU), a graphics processing unit (GPU), a micro-processor unit (MPU), a field-programmable gate array (FPGA), or other devices.
The memory 220 temporarily stores programs and data. The programs are loaded from, for example, the storage 230. The data includes data input to the computer 200 and data generated by the processor 210. In at least one aspect, the memory 220 is implemented as a random access memory (RAM) or other volatile memories.
The storage 230 permanently stores programs and data. In at least one embodiment, the storage 230 stores programs and data for a period of time longer than the memory 220, but not permanently. The storage 230 is implemented as, for example, a read-only memory (ROM), a hard disk device, a flash memory, or other non-volatile storage devices. The programs stored in the storage 230 include programs for providing a virtual space in the system 100, simulation programs, game programs, user authentication programs, and programs for implementing communication to/from other computers 200. The data stored in the storage 230 includes data and objects for defining the virtual space.
In at least one aspect, the storage 230 is implemented as a removable storage device like a memory card. In at least one aspect, a configuration that uses programs and data stored in an external storage device is used instead of the storage 230 built into the computer 200. With such a configuration, for example, in a situation in which a plurality of HMD systems 100 are used, for example in an amusement facility, the programs and the data are collectively updated.
The input/output interface 240 allows communication of signals among the HMD 120, the HMD sensor 410, the motion sensor 420, and the display 430. The monitor 130, the eye gaze sensor 140, the first camera 150, the second camera 160, the microphone 170, and the speaker 180 included in the HMD 120 may communicate to/from the computer 200 via the input/output interface 240 of the HMD 120. In at least one aspect, the input/output interface 240 is implemented with use of a universal serial bus (USB), a digital visual interface (DVI), a high-definition multimedia interface (HDMI) (trademark), or other terminals. The input/output interface 240 is not limited to the specific examples described above.
In at least one aspect, the input/output interface 240 further communicates to/from the controller 300. For example, the input/output interface 240 receives input of a signal output from the controller 300 and the motion sensor 420. In at least one aspect, the input/output interface 240 transmits a command output from the processor 210 to the controller 300. The command instructs the controller 300 to, for example, vibrate, output a sound, or emit light. When the controller 300 receives the command, the controller 300 executes any one of vibration, sound output, and light emission in accordance with the command.
The communication interface 250 is connected to the network 2 to communicate to/from other computers (e.g., server 600) connected to the network 2. In at least one aspect, the communication interface 250 is implemented as, for example, a local area network (LAN), other wired communication interfaces, wireless fidelity (Wi-Fi), Bluetooth®, near field communication (NFC), or other wireless communication interfaces. The communication interface 250 is not limited to the specific examples described above.
In at least one aspect, the processor 210 accesses the storage 230 and loads one or more programs stored in the storage 230 to the memory 220 to execute a series of commands included in the program. In at least one embodiment, the one or more programs include an operating system of the computer 200, an application program for providing a virtual space, and/or game software that is executable in the virtual space. The processor 210 transmits a signal for providing a virtual space to the HMD 120 via the input/output interface 240. The HMD 120 displays a video on the monitor 130 based on the signal.
In FIG. 2, the computer 200 is outside of the HMD 120, but in at least one aspect, the computer 200 is integral with the HMD 120. As an example, a portable information communication terminal (e.g., smartphone) including the monitor 130 functions as the computer 200 in at least one embodiment.
In at least one embodiment, the computer 200 is used in common with a plurality of HMDs 120. With such a configuration, for example, the computer 200 is able to provide the same virtual space to a plurality of users, and hence each user can enjoy the same application with other users in the same virtual space.
According to at least one embodiment of this disclosure, in the system 100, a real coordinate system is set in advance. The real coordinate system is a coordinate system in the real space. The real coordinate system has three reference directions (axes) that are respectively parallel to a vertical direction, a horizontal direction orthogonal to the vertical direction, and a front-rear direction orthogonal to both of the vertical direction and the horizontal direction in the real space. The horizontal direction, the vertical direction (up-down direction), and the front-rear direction in the real coordinate system are defined as an x axis, a y axis, and a z axis, respectively. More specifically, the x axis of the real coordinate system is parallel to the horizontal direction of the real space, the y axis thereof is parallel to the vertical direction of the real space, and the z axis thereof is parallel to the front-rear direction of the real space.
In at least one aspect, the HMD sensor 410 includes an infrared sensor. When the infrared sensor detects the infrared ray emitted from each light source of the HMD 120, the infrared sensor detects the presence of the HMD 120. The HMD sensor 410 further detects the position and the inclination (direction) of the HMD 120 in the real space, which corresponds to the motion of the user 5 wearing the HMD 120, based on the value of each point (each coordinate value in the real coordinate system). In more detail, the HMD sensor 410 is able to detect the temporal change of the position and the inclination of the HMD 120 with use of each value detected over time.
Each inclination of the HMD 120 detected by the HMD sensor 410 corresponds to an inclination about each of the three axes of the HMD 120 in the real coordinate system. The HMD sensor 410 sets a uvw visual-field coordinate system to the HMD 120 based on the inclination of the HMD 120 in the real coordinate system. The uvw visual-field coordinate system set to the HMD 120 corresponds to a point-of-view coordinate system used when the user 5 wearing the HMD 120 views an object in the virtual space.
[Uvw Visual-Field Coordinate System]
With reference to FIG. 3, the uvw visual-field coordinate system is described. FIG. 3 is a diagram of a uvw visual-field coordinate system to be set for the HMD 120 according to at least one embodiment of this disclosure. The HMD sensor 410 detects the position and the inclination of the HMD 120 in the real coordinate system when the HMD 120 is activated. The processor 210 sets the uvw visual-field coordinate system to the HMD 120 based on the detected values.
In FIG. 3, the HMD 120 sets the three-dimensional uvw visual-field coordinate system defining the head of the user 5 wearing the HMD 120 as a center (origin). More specifically, the HMD 120 sets three directions newly obtained by inclining the horizontal direction, the vertical direction, and the front-rear direction (x axis, y axis, and z axis), which define the real coordinate system, about the respective axes by the inclinations about the respective axes of the HMD 120 in the real coordinate system, as a pitch axis (u axis), a yaw axis (v axis), and a roll axis (w axis) of the uvw visual-field coordinate system in the HMD 120.
In at least one aspect, when the user 5 wearing the HMD 120 is standing (or sitting) upright and is visually recognizing the front side, the processor 210 sets the uvw visual-field coordinate system that is parallel to the real coordinate system to the HMD 120. In this case, the horizontal direction (x axis), the vertical direction (y axis), and the front-rear direction (z axis) of the real coordinate system directly match the pitch axis (u axis), the yaw axis (v axis), and the roll axis (w axis) of the uvw visual-field coordinate system in the HMD 120, respectively.
After the uvw visual-field coordinate system is set to the HMD 120, the HMD sensor 410 is able to detect the inclination of the HMD 120 in the set uvw visual-field coordinate system based on the motion of the HMD 120. In this case, the HMD sensor 410 detects, as the inclination of the HMD 120, each of a pitch angle (θu), a yaw angle (θv), and a roll angle (θw) of the HMD 120 in the uvw visual-field coordinate system. The pitch angle (θu) represents an inclination angle of the HMD 120 about the pitch axis in the uvw visual-field coordinate system. The yaw angle (θv) represents an inclination angle of the HMD 120 about the yaw axis in the uvw visual-field coordinate system. The roll angle (θw) represents an inclination angle of the HMD 120 about the roll axis in the uvw visual-field coordinate system.
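For illustration, the construction of the uvw axes from the real coordinate system can be sketched as follows; the rotation order is an assumption, since this disclosure does not prescribe one.

```python
# Illustrative sketch: rotate the x, y, z axes of the real coordinate system by the
# detected pitch (θu), yaw (θv), and roll (θw) to obtain the u, v, w axes of the HMD.
import math

def rotate(v, axis, angle):
    """Rodrigues' rotation of vector v about a unit axis by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    d = sum(a * b for a, b in zip(axis, v))
    cross = (axis[1] * v[2] - axis[2] * v[1],
             axis[2] * v[0] - axis[0] * v[2],
             axis[0] * v[1] - axis[1] * v[0])
    return tuple(v[i] * c + cross[i] * s + axis[i] * d * (1 - c) for i in range(3))

def uvw_axes(pitch, yaw, roll):
    x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    axes = (x, y, z)
    # Apply yaw about y, then pitch about x, then roll about z (one possible order).
    axes = tuple(rotate(v, y, yaw) for v in axes)
    axes = tuple(rotate(v, x, pitch) for v in axes)
    axes = tuple(rotate(v, z, roll) for v in axes)
    u_axis, v_axis, w_axis = axes
    return u_axis, v_axis, w_axis
```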
The HMD sensor 410 sets, to the HMD 120, the uvw visual-field coordinate system of the HMD 120 obtained after the movement of the HMD 120 based on the detected inclination angle of the HMD 120. The relationship between the HMD 120 and the uvw visual-field coordinate system of the HMD 120 is constant regardless of the position and the inclination of the HMD 120. When the position and the inclination of the HMD 120 change, the position and the inclination of the uvw visual-field coordinate system of the HMD 120 in the real coordinate system change in synchronization with the change of the position and the inclination.
In at least one aspect, the HMD sensor 410 identifies the position of the HMD 120 in the real space as a position relative to the HMD sensor 410 based on the light intensity of the infrared ray or a relative positional relationship between a plurality of points (e.g., distance between points), which is acquired based on output from the infrared sensor. In at least one aspect, the processor 210 determines the origin of the uvw visual-field coordinate system of the HMD 120 in the real space (real coordinate system) based on the identified relative position.
[Virtual Space]
With reference to FIG. 4, the virtual space is further described. FIG. 4 is a diagram of a mode of expressing a virtual space 11 according to at least one embodiment of this disclosure. The virtual space 11 has a structure with an entire celestial sphere shape covering a center 12 in all 360-degree directions. In FIG. 4, for the sake of clarity, only the upper-half celestial sphere of the virtual space 11 is included. Each mesh section is defined in the virtual space 11. The position of each mesh section is defined in advance as coordinate values in an XYZ coordinate system, which is a global coordinate system defined in the virtual space 11. The computer 200 associates each partial image forming a panorama image 13 (e.g., still image or moving image) that is developed in the virtual space 11 with each corresponding mesh section in the virtual space 11.
In at least one aspect, in the virtual space 11, the XYZ coordinate system having the center 12 as the origin is defined. The XYZ coordinate system is, for example, parallel to the real coordinate system. The horizontal direction, the vertical direction (up-down direction), and the front-rear direction of the XYZ coordinate system are defined as an X axis, a Y axis, and a Z axis, respectively. Thus, the X axis (horizontal direction) of the XYZ coordinate system is parallel to the x axis of the real coordinate system, the Y axis (vertical direction) of the XYZ coordinate system is parallel to the y axis of the real coordinate system, and the Z axis (front-rear direction) of the XYZ coordinate system is parallel to the z axis of the real coordinate system.
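As an illustrative sketch of that association (assuming an equirectangular panorama image, which this disclosure does not specify), the direction of a mesh section seen from the center 12 can be mapped to texture coordinates of the panorama image 13:

```python
# Illustrative sketch: equirectangular mapping from a direction to panorama texture coordinates.
import math

def panorama_uv(direction):
    """direction is a unit (X, Y, Z) vector from the center 12 toward a mesh section;
    returns (u, v) texture coordinates in [0, 1) x [0, 1]."""
    x, y, z = direction
    u = (math.atan2(x, z) / (2 * math.pi)) % 1.0       # azimuth around the Y axis
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi    # polar angle from the +Y pole
    return u, v
```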
When the HMD 120 is activated, that is, when the HMD 120 is in an initial state, a virtual camera 14 is arranged at the center 12 of the virtual space 11. In at least one embodiment, the virtual camera 14 is offset from the center 12 in the initial state. In at least one aspect, the processor 210 displays on the monitor 130 of the HMD 120 an image photographed by the virtual camera 14. In synchronization with the motion of the HMD 120 in the real space, the virtual camera 14 similarly moves in the virtual space 11. With this, the change in position and direction of the HMD 120 in the real space is reproduced similarly in the virtual space 11.
The uvw visual-field coordinate system is defined in the virtual camera 14 similarly to the case of the HMD 120. The uvw visual-field coordinate system of the virtual camera 14 in the virtual space 11 is defined to be synchronized with the uvw visual-field coordinate system of the HMD 120 in the real space (real coordinate system). Therefore, when the inclination of the HMD 120 changes, the inclination of the virtual camera 14 also changes in synchronization therewith. The virtual camera 14 can also move in the virtual space 11 in synchronization with the movement of the user 5 wearing the HMD 120 in the real space.
The processor 210 of the computer 200 defines a field-of-view region 15 in the virtual space 11 based on the position and inclination (reference line of sight 16) of the virtual camera 14. The field-of-view region 15 corresponds to, of the virtual space 11, the region that is visually recognized by the user 5 wearing the HMD 120. That is, the position of the virtual camera 14 determines a point of view of the user 5 in the virtual space 11.
The line of sight of the user 5 detected by the eye gaze sensor 140 is a direction in the point-of-view coordinate system obtained when the user 5 visually recognizes an object. The uvw visual-field coordinate system of the HMD 120 is equal to the point-of-view coordinate system used when the user 5 visually recognizes the monitor 130. The uvw visual-field coordinate system of the virtual camera 14 is synchronized with the uvw visual-field coordinate system of the HMD 120. Therefore, in the system 100 in at least one aspect, the line of sight of the user 5 detected by the eye gaze sensor 140 can be regarded as the line of sight of the user 5 in the uvw visual-field coordinate system of the virtual camera 14.
[User's Line of Sight]
With reference to FIG. 5, determination of the line of sight of the user 5 is described. FIG. 5 is a plan view diagram of the head of the user 5 wearing the HMD 120 according to at least one embodiment of this disclosure.
In at least one aspect, the eye gaze sensor 140 detects lines of sight of the right eye and the left eye of the user 5. In at least one aspect, when the user 5 is looking at a near place, the eye gaze sensor 140 detects lines of sight R1 and L1. In at least one aspect, when the user 5 is looking at a far place, the eye gaze sensor 140 detects lines of sight R2 and L2. In this case, the angles formed by the lines of sight R2 and L2 with respect to the roll axis w are smaller than the angles formed by the lines of sight R1 and L1 with respect to the roll axis w. The eye gaze sensor 140 transmits the detection results to the computer 200.
When the computer 200 receives the detection values of the lines of sight R1 and L1 from the eye gaze sensor 140 as the detection results of the lines of sight, the computer 200 identifies a point of gaze N1 being an intersection of both the lines of sight R1 and L1 based on the detection values. Meanwhile, when the computer 200 receives the detection values of the lines of sight R2 and L2 from the eye gaze sensor 140, the computer 200 identifies an intersection of both the lines of sight R2 and L2 as the point of gaze. The computer 200 identifies a line of sight N0 of the user 5 based on the identified point of gaze N1. The computer 200 detects, for example, an extension direction of a straight line that passes through the point of gaze N1 and a midpoint of a straight line connecting a right eye R and a left eye L of the user 5 to each other as the line of sight N0. The line of sight N0 is a direction in which the user 5 actually directs his or her lines of sight with both eyes. The line of sight N0 corresponds to a direction in which the user 5 actually directs his or her lines of sight with respect to the field-of-view region 15.
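An illustrative sketch of that computation is given below; because two measured gaze rays rarely intersect exactly, the midpoint of their closest points is used as the point of gaze N1, which is an implementation assumption rather than a requirement of this disclosure.

```python
# Illustrative sketch: estimate the point of gaze N1 and the line of sight N0
# from the two eye positions and their unit gaze directions.
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def gaze_point_and_line_of_sight(right_eye, right_dir, left_eye, left_dir):
    """Returns (N1, N0): the estimated point of gaze and the unit line-of-sight vector."""
    # Closest points of the two gaze rays (standard line-line closest-point formula).
    w0 = sub(right_eye, left_eye)
    a, b, c = dot(right_dir, right_dir), dot(right_dir, left_dir), dot(left_dir, left_dir)
    d, e = dot(right_dir, w0), dot(left_dir, w0)
    denom = a * c - b * b or 1e-9
    t_r = (b * e - c * d) / denom
    t_l = (a * e - b * d) / denom
    p_r = add(right_eye, scale(right_dir, t_r))
    p_l = add(left_eye, scale(left_dir, t_l))
    n1 = scale(add(p_r, p_l), 0.5)
    # N0 runs from the midpoint between the eyes through the point of gaze N1.
    midpoint = scale(add(right_eye, left_eye), 0.5)
    direction = sub(n1, midpoint)
    length = math.sqrt(dot(direction, direction)) or 1e-9
    n0 = scale(direction, 1.0 / length)
    return n1, n0
```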
In at least one aspect, the system 100 includes a television broadcast reception tuner. With such a configuration, the system 100 is able to display a television program in the virtual space 11.
In at least one aspect, the HMD system 100 includes a communication circuit for connecting to the Internet or has a verbal communication function for connecting to a telephone line or a cellular service.
[Field-of-View Region]
With reference to FIG. 6 and FIG. 7, the field-of-view region 15 is described. FIG. 6 is a diagram of a YZ cross section obtained by viewing the field-of-view region 15 from an X direction in the virtual space 11. FIG. 7 is a diagram of an XZ cross section obtained by viewing the field-of-view region 15 from a Y direction in the virtual space 11.
In FIG. 6, the field-of-view region 15 in the YZ cross section includes a region 18. The region 18 is defined by the position of the virtual camera 14, the reference line of sight 16, and the YZ cross section of the virtual space 11. The processor 210 defines a range of a polar angle α from the reference line of sight 16 serving as the center in the virtual space 11 as the region 18.
In FIG. 7, the field-of-view region 15 in the XZ cross section includes a region 19. The region 19 is defined by the position of the virtual camera 14, the reference line of sight 16, and the XZ cross section of the virtual space 11. The processor 210 defines a range of an azimuth β from the reference line of sight 16 serving as the center in the virtual space 11 as the region 19. The polar angle α and the azimuth β are determined in accordance with the position of the virtual camera 14 and the inclination (direction) of the virtual camera 14.
In at least one aspect, the system 100 causes the monitor 130 to display a field-of-view image 17 based on the signal from the computer 200, to thereby provide the field of view in the virtual space 11 to the user 5. The field-of-view image 17 corresponds to a part of the panorama image 13, which corresponds to the field-of-view region 15. When the user 5 moves the HMD 120 worn on his or her head, the virtual camera 14 is also moved in synchronization with the movement. As a result, the position of the field-of-view region 15 in the virtual space 11 is changed. With this, the field-of-view image 17 displayed on the monitor 130 is updated to an image of the panorama image 13, which is superimposed on the field-of-view region 15 synchronized with a direction in which the user 5 faces in the virtual space 11. The user 5 can visually recognize a desired direction in the virtual space 11.
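As an illustrative sketch (reading α and β as total angular extents centered on the reference line of sight 16, which is one interpretation of FIG. 6 and FIG. 7), a direction in the virtual space 11 can be tested against the field-of-view region 15 as follows:

```python
# Illustrative sketch: test whether a direction falls inside the field-of-view region 15.
import math

def in_field_of_view(direction, alpha, beta):
    """direction is a unit vector in the virtual camera's uvw coordinate system, with the
    w (roll) axis taken as the reference line of sight 16; alpha and beta are in radians."""
    u, v, w = direction
    vertical = math.atan2(v, w)     # angle in the plane of FIG. 6 (region 18)
    horizontal = math.atan2(u, w)   # angle in the plane of FIG. 7 (region 19)
    return abs(vertical) <= alpha / 2 and abs(horizontal) <= beta / 2
```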
In this way, the inclination of the virtual camera 14 corresponds to the line of sight of the user 5 (reference line of sight 16) in the virtual space 11, and the position at which the virtual camera 14 is arranged corresponds to the point of view of the user 5 in the virtual space 11. Therefore, through the change of the position or inclination of the virtual camera 14, the image to be displayed on the monitor 130 is updated, and the field of view of the user 5 is moved.
While the user 5 is wearing the HMD 120 (having a non-transmissive monitor 130), the user 5 can visually recognize only the panorama image 13 developed in the virtual space 11 without visually recognizing the real world. Therefore, the system 100 provides a high sense of immersion in the virtual space 11 to the user 5.
In at least one aspect, the processor 210 moves the virtual camera 14 in the virtual space 11 in synchronization with the movement in the real space of the user 5 wearing the HMD 120. In this case, the processor 210 identifies an image region to be projected on the monitor 130 of the HMD 120 (field-of-view region 15) based on the position and the direction of the virtual camera 14 in the virtual space 11.
In at least one aspect, the virtual camera14includes two virtual cameras, that is, a virtual camera for providing a right-eye image and a virtual camera for providing a left-eye image. An appropriate parallax is set for the two virtual cameras so that the user5is able to recognize the three-dimensional virtual space11. In at least one aspect, the virtual camera14is implemented by a single virtual camera. In this case, a right-eye image and a left-eye image may be generated from an image acquired by the single virtual camera. In at least one embodiment, the virtual camera14is assumed to include two virtual cameras, and the roll axes of the two virtual cameras are synthesized so that the generated roll axis (w) is adapted to the roll axis (w) of the HMD120.
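A minimal sketch of placing the two virtual cameras is given below, assuming the eye cameras are offset from the HMD camera position along the HMD's horizontal (u) axis by half an interpupillary distance; the 0.064 m value and the axis naming are assumptions used only for illustration:

import numpy as np

def stereo_camera_positions(head_position, right_axis, ipd=0.064):
    # Place a left-eye and a right-eye virtual camera on either side of the
    # HMD position, separated by the interpupillary distance (ipd, meters);
    # both cameras share the HMD's orientation (roll axis w).
    right_axis = right_axis / np.linalg.norm(right_axis)
    half = ipd / 2.0
    left_cam = head_position - half * right_axis
    right_cam = head_position + half * right_axis
    return left_cam, right_cam

# Usage with a hypothetical HMD position and horizontal axis
head = np.array([0.0, 1.6, 0.0])
right_axis = np.array([1.0, 0.0, 0.0])
left_cam, right_cam = stereo_camera_positions(head, right_axis)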
[Controller]
An example of the controller300is described with reference toFIG. 8AandFIG. 8B.FIG. 8Ais a diagram of a schematic configuration of a controller according to at least one embodiment of this disclosure.FIG. 8Bis a diagram of a coordinate system to be set for a hand of a user holding the controller according to at least one embodiment of this disclosure.
In at least one aspect, the controller300includes a right controller300R and a left controller (not shown). InFIG. 8Aonly right controller300R is shown for the sake of clarity. The right controller300R is operable by the right hand of the user5. The left controller is operable by the left hand of the user5. In at least one aspect, the right controller300R and the left controller are symmetrically configured as separate devices. Therefore, the user5can freely move his or her right hand holding the right controller300R and his or her left hand holding the left controller. In at least one aspect, the controller300may be an integrated controller configured to receive an operation performed by both the right and left hands of the user5. The right controller300R is now described.
The right controller300R includes a grip310, a frame320, and a top surface330. The grip310is configured so as to be held by the right hand of the user5. For example, the grip310may be held by the palm and three fingers (e.g., middle finger, ring finger, and small finger) of the right hand of the user5.
The grip310includes buttons340and350and the motion sensor420. The button340is arranged on a side surface of the grip310, and receives an operation performed by, for example, the middle finger of the right hand. The button350is arranged on a front surface of the grip310, and receives an operation performed by, for example, the index finger of the right hand. In at least one aspect, the buttons340and350are configured as trigger type buttons. The motion sensor420is built into the casing of the grip310. When a motion of the user5can be detected from the surroundings of the user5by a camera or other device, in at least one embodiment, the grip310does not include the motion sensor420.
The frame320includes a plurality of infrared LEDs360arranged in a circumferential direction of the frame320. The infrared LEDs360emit, during execution of a program using the controller300, infrared rays in accordance with progress of the program. The infrared rays emitted from the infrared LEDs360are usable to independently detect the position and the posture (inclination and direction) of each of the right controller300R and the left controller. InFIG. 8A, the infrared LEDs360are shown as being arranged in two rows, but the number of arrangement rows is not limited to that illustrated inFIG. 8A. In at least one embodiment, the infrared LEDs360are arranged in one row or in three or more rows. In at least one embodiment, the infrared LEDs360are arranged in a pattern other than rows.
The top surface330includes buttons370and380and an analog stick390. The buttons370and380are configured as push type buttons. The buttons370and380receive an operation performed by the thumb of the right hand of the user5. In at least one aspect, the analog stick390receives an operation performed in any direction of 360 degrees from an initial position (neutral position). The operation includes, for example, an operation for moving an object arranged in the virtual space11.
In at least one aspect, each of the right controller300R and the left controller includes a battery for driving the infrared ray LEDs360and other members. The battery includes, for example, a rechargeable battery, a button battery, or a dry battery, but the battery is not limited thereto. In at least one aspect, the right controller300R and the left controller are connectable to, for example, a USB interface of the computer200. In at least one embodiment, the right controller300R and the left controller do not include a battery.
InFIG. 8AandFIG. 8B, for example, a yaw direction, a roll direction, and a pitch direction are defined with respect to the right hand of the user5. A direction of an extended thumb is defined as the yaw direction, a direction of an extended index finger is defined as the roll direction, and a direction perpendicular to a plane defined by the yaw-direction axis and the roll-direction axis is defined as the pitch direction.
[Hardware Configuration of Server]
With reference toFIG. 9, the server600in at least one embodiment is described.FIG. 9is a block diagram of a hardware configuration of the server600according to at least one embodiment of this disclosure. The server600includes a processor610, a memory620, a storage630, an input/output interface640, and a communication interface650. Each component is connected to a bus660. In at least one embodiment, at least one of the processor610, the memory620, the storage630, the input/output interface640or the communication interface650is part of a separate structure and communicates with other components of server600through a communication path other than the bus660.
The processor610executes a series of commands included in a program stored in the memory620or the storage630based on a signal transmitted to the server600or on satisfaction of a condition determined in advance. In at least one aspect, the processor610is implemented as a central processing unit (CPU), a graphics processing unit (GPU), a micro processing unit (MPU), a field-programmable gate array (FPGA), or other devices.
The memory620temporarily stores programs and data. The programs are loaded from, for example, the storage630. The data includes data input to the server600and data generated by the processor610. In at least one aspect, the memory620is implemented as a random access memory (RAM) or other volatile memories.
The storage630permanently stores programs and data. In at least one embodiment, the storage630stores programs and data for a period of time longer than the memory620, but not permanently. The storage630is implemented as, for example, a read-only memory (ROM), a hard disk device, a flash memory, or other non-volatile storage devices. The programs stored in the storage630include programs for providing a virtual space in the system100, simulation programs, game programs, user authentication programs, and programs for implementing communication to/from other computers200or servers600. The data stored in the storage630may include, for example, data and objects for defining the virtual space.
In at least one aspect, the storage630is implemented as a removable storage device like a memory card. In at least one aspect, a configuration that uses programs and data stored in an external storage device is used instead of the storage630built into the server600. With such a configuration, for example, in a situation in which a plurality of HMD systems100are used, for example, as in an amusement facility, the programs and the data are collectively updated.
The input/output interface640allows communication of signals to/from an input/output device. In at least one aspect, the input/output interface640is implemented with use of a USB, a DVI, an HDMI, or other terminals. The input/output interface640is not limited to the specific examples described above.
The communication interface650is connected to the network2to communicate to/from the computer200connected to the network2. In at least one aspect, the communication interface650is implemented as, for example, a LAN, other wired communication interfaces, Wi-Fi, Bluetooth, NFC, or other wireless communication interfaces. The communication interface650is not limited to the specific examples described above.
In at least one aspect, the processor610accesses the storage630and loads one or more programs stored in the storage630to the memory620to execute a series of commands included in the program. In at least one embodiment, the one or more programs include, for example, an operating system of the server600, an application program for providing a virtual space, and game software that can be executed in the virtual space. In at least one embodiment, the processor610transmits, to the computer200via the input/output interface640, a signal for providing a virtual space to the HMD set110.
[Control Device of HMD]
With reference toFIG. 10, the control device of the HMD120is described. According to at least one embodiment of this disclosure, the control device is implemented by the computer200having a known configuration.FIG. 10is a block diagram of the computer200according to at least one embodiment of this disclosure.FIG. 10includes a module configuration of the computer200.
InFIG. 10, the computer200includes a control module510, a rendering module520, a memory module530, and a communication control module540. In at least one aspect, the control module510and the rendering module520are implemented by the processor210. In at least one aspect, a plurality of processors210function as the control module510and the rendering module520. The memory module530is implemented by the memory220or the storage230. The communication control module540is implemented by the communication interface250.
The control module510controls the virtual space11provided to the user5. The control module510defines the virtual space11in the HMD system100using virtual space data representing the virtual space11. The virtual space data is stored in, for example, the memory module530. In at least one embodiment, the control module510generates virtual space data. In at least one embodiment, the control module510acquires virtual space data from, for example, the server600.
The control module510arranges objects in the virtual space11using object data representing objects. The object data is stored in, for example, the memory module530. In at least one embodiment, the control module510generates object data. In at least one embodiment, the control module510acquires object data from, for example, the server600. In at least one embodiment, the objects include, for example, an avatar object of the user5, character objects, operation objects, for example, a virtual hand to be operated by the controller300, and forests, mountains, other landscapes, streetscapes, or animals to be arranged in accordance with the progression of the story of the game.
The control module510arranges an avatar object of the user5of another computer200, which is connected via the network2, in the virtual space11. In at least one aspect, the control module510arranges an avatar object of the user5in the virtual space11. In at least one aspect, the control module510arranges an avatar object simulating the user5in the virtual space11based on an image including the user5. In at least one aspect, the control module510arranges an avatar object in the virtual space11, which is selected by the user5from among a plurality of types of avatar objects (e.g., objects simulating animals or objects of deformed humans).
The control module510identifies an inclination of the HMD120based on output of the HMD sensor410. In at least one aspect, the control module510identifies an inclination of the HMD120based on output of the sensor190functioning as a motion sensor. The control module510detects parts (e.g., mouth, eyes, and eyebrows) forming the face of the user5from a face image of the user5generated by the first camera150and the second camera160. The control module510detects a motion (shape) of each detected part.
The control module510detects a line of sight of the user5in the virtual space11based on a signal from the eye gaze sensor140. The control module510detects a point-of-view position (coordinate values in the XYZ coordinate system) at which the detected line of sight of the user5and the celestial sphere of the virtual space11intersect with each other. More specifically, the control module510detects the point-of-view position based on the line of sight of the user5defined in the uvw coordinate system and the position and the inclination of the virtual camera14. The control module510transmits the detected point-of-view position to the server600. In at least one aspect, the control module510is configured to transmit line-of-sight information representing the line of sight of the user5to the server600. In such a case, the control module510may calculate the point-of-view position based on the line-of-sight information received by the server600.
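A minimal sketch of the point-of-view computation is given below, assuming the celestial sphere of the virtual space is modeled as a sphere of known center and radius and the line of sight as a ray cast from the virtual camera position (this ray-sphere formulation is an assumption consistent with, but not stated in, the disclosure):

import numpy as np

def point_of_view_position(camera_pos, gaze_dir, sphere_center, sphere_radius):
    # Find where the line of sight, cast from the virtual camera position
    # along the gaze direction, intersects the celestial sphere of the
    # virtual space (standard ray-sphere intersection).
    d = gaze_dir / np.linalg.norm(gaze_dir)
    oc = camera_pos - sphere_center
    b = 2.0 * float(d @ oc)
    c = float(oc @ oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # no intersection (should not happen
                                         # for a camera inside the sphere)
    t = (-b + np.sqrt(disc)) / 2.0       # take the forward intersection
    return camera_pos + t * d

# Usage: camera near the center of a sphere of radius 10, gazing along +Z
pov = point_of_view_position(np.array([0.0, 1.6, 0.0]), np.array([0.0, 0.0, 1.0]),
                             np.array([0.0, 0.0, 0.0]), 10.0)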
The control module510translates a motion of the HMD120, which is detected by the HMD sensor410, in an avatar object. For example, the control module510detects inclination of the HMD120, and arranges the avatar object in an inclined manner. The control module510translates the detected motion of face parts in a face of the avatar object arranged in the virtual space11. The control module510receives line-of-sight information of another user5from the server600, and translates the line-of-sight information in the line of sight of the avatar object of another user5. In at least one aspect, the control module510translates a motion of the controller300in an avatar object and an operation object. In this case, the controller300includes, for example, a motion sensor, an acceleration sensor, or a plurality of light emitting elements (e.g., infrared LEDs) for detecting a motion of the controller300.
The control module510arranges, in the virtual space11, an operation object for receiving an operation by the user5in the virtual space11. The user5operates the operation object to, for example, operate an object arranged in the virtual space11. In at least one aspect, the operation object includes, for example, a hand object serving as a virtual hand corresponding to a hand of the user5. In at least one aspect, the control module510moves the hand object in the virtual space11so that the hand object moves in association with a motion of the hand of the user5in the real space based on output of the motion sensor420. In at least one aspect, the operation object may correspond to a hand part of an avatar object.
When one object arranged in the virtual space11collides with another object, the control module510detects the collision. The control module510is able to detect, for example, a timing at which a collision area of one object and a collision area of another object have touched each other, and performs predetermined processing in response to the detected timing. In at least one embodiment, the control module510detects a timing at which an object and another object, which have been in contact with each other, have moved away from each other, and performs predetermined processing in response to the detected timing. In at least one embodiment, the control module510detects a state in which an object and another object are in contact with each other. For example, when an operation object touches another object, the control module510detects the fact that the operation object has touched the other object, and performs predetermined processing.
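A minimal sketch of this collision handling is shown below, assuming spherical collision areas and a per-frame check that reports both the current contact state and the touch-started / touch-ended timings (the data layout is hypothetical):

from dataclasses import dataclass
import numpy as np

@dataclass
class CollisionArea:
    center: np.ndarray
    radius: float

def is_touching(a: CollisionArea, b: CollisionArea) -> bool:
    # Two collision areas touch when the distance between their centers is
    # no larger than the sum of their radii (spherical areas assumed).
    return float(np.linalg.norm(a.center - b.center)) <= a.radius + b.radius

def collision_events(prev_touching: bool, a: CollisionArea, b: CollisionArea):
    # Return the current touch state plus the touch-started / touch-ended
    # edge events, so predetermined processing can run at those timings.
    now = is_touching(a, b)
    started = now and not prev_touching
    ended = prev_touching and not now
    return now, started, ended

# Usage: a hand (operation object) overlapping another object this frame
hand = CollisionArea(np.array([0.0, 0.0, 0.0]), 0.1)
target = CollisionArea(np.array([0.15, 0.0, 0.0]), 0.1)
touching, started, ended = collision_events(False, hand, target)  # True, True, False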
In at least one aspect, the control module510controls image display of the HMD120on the monitor130. For example, the control module510arranges the virtual camera14in the virtual space11. The control module510controls the position of the virtual camera14and the inclination (direction) of the virtual camera14in the virtual space11. The control module510defines the field-of-view region15depending on an inclination of the head of the user5wearing the HMD120and the position of the virtual camera14. The rendering module520generates the field-of-view image17to be displayed on the monitor130based on the determined field-of-view region15. The communication control module540outputs the field-of-view image17generated by the rendering module520to the HMD120.
The control module510, which has detected an utterance of the user5using the microphone170from the HMD120, identifies the computer200to which voice data corresponding to the utterance is to be transmitted. The voice data is transmitted to the computer200identified by the control module510. The control module510, which has received voice data from the computer200of another user via the network2, outputs audio information (utterances) corresponding to the voice data from the speaker180.
The memory module530holds data to be used to provide the virtual space11to the user5by the computer200. In at least one aspect, the memory module530stores space information, object information, and user information.
The space information stores one or more templates defined to provide the virtual space11.
The object information stores a plurality of panorama images13forming the virtual space11and object data for arranging objects in the virtual space11. In at least one embodiment, the panorama image13contains a still image and/or a moving image. In at least one embodiment, the panorama image13contains an image in a non-real space and/or an image in the real space. An example of the image in a non-real space is an image generated by computer graphics.
The user information stores a user ID for identifying the user5. The user ID is, for example, an internet protocol (IP) address or a media access control (MAC) address set to the computer200used by the user. In at least one aspect, the user ID is set by the user. The user information stores, for example, a program for causing the computer200to function as the control device of the HMD system100.
The data and programs stored in the memory module530are input by the user5of the HMD120. Alternatively, the processor210downloads the programs or data from a computer (e.g., server600) that is managed by a business operator providing the content, and stores the downloaded programs or data in the memory module530.
In at least one embodiment, the communication control module540communicates to/from the server600or other information communication devices via the network2.
In at least one aspect, the control module510and the rendering module520are implemented with use of, for example, Unity® provided by Unity Technologies. In at least one aspect, the control module510and the rendering module520are implemented by combining the circuit elements for implementing each step of processing.
The processing performed in the computer200is implemented by hardware and software executed by the processor210. In at least one embodiment, the software is stored in advance on a hard disk or other memory module530. In at least one embodiment, the software is stored on a CD-ROM or other computer-readable non-volatile data recording media, and distributed as a program product. In at least one embodiment, the software is provided as a program product that is downloadable from an information provider connected to the Internet or other networks. Such software is read from the data recording medium by an optical disc drive device or other data reading devices, or is downloaded from the server600or other computers via the communication control module540and then temporarily stored in a storage module. The software is read from the storage module by the processor210, and is stored in a RAM in a format of an executable program. The processor210executes the program.
[Control Structure of HMD System]
With reference toFIG. 11, the control structure of the HMD set110is described.FIG. 11is a sequence chart of processing to be executed by the system100according to at least one embodiment of this disclosure.
InFIG. 11, in Step S1110, the processor210of the computer200serves as the control module510to identify virtual space data and define the virtual space11.
In Step S1120, the processor210initializes the virtual camera14. For example, in a work area of the memory, the processor210arranges the virtual camera14at the center12defined in advance in the virtual space11, and matches the line of sight of the virtual camera14with the direction in which the user5faces.
In Step S1130, the processor210serves as the rendering module520to generate field-of-view image data for displaying an initial field-of-view image. The generated field-of-view image data is output to the HMD120by the communication control module540.
In Step S1132, the monitor130of the HMD120displays the field-of-view image based on the field-of-view image data received from the computer200. The user5wearing the HMD120is able to recognize the virtual space11through visual recognition of the field-of-view image.
In Step S1134, the HMD sensor410detects the position and the inclination of the HMD120based on a plurality of infrared rays emitted from the HMD120. The detection results are output to the computer200as motion detection data.
In Step S1140, the processor210identifies a field-of-view direction of the user5wearing the HMD120based on the position and inclination contained in the motion detection data of the HMD120.
In Step S1150, the processor210executes an application program, and arranges an object in the virtual space11based on a command contained in the application program.
In Step S1160, the controller300detects an operation by the user5based on a signal output from the motion sensor420, and outputs detection data representing the detected operation to the computer200. In at least one aspect, an operation of the controller300by the user5is detected based on an image from a camera arranged around the user5.
In Step S1170, the processor210detects an operation of the controller300by the user5based on the detection data acquired from the controller300.
In Step S1180, the processor210generates field-of-view image data based on the operation of the controller300by the user5. The communication control module540outputs the generated field-of-view image data to the HMD120.
In Step S1190, the HMD120updates a field-of-view image based on the received field-of-view image data, and displays the updated field-of-view image on the monitor130.
[Avatar Object]
With reference toFIG. 12AandFIG. 12B, an avatar object according to at least one embodiment is described.FIG. 12AandFIG. 12Bare diagrams of avatar objects of respective users5of the HMD sets110A and110B. In the following, the user of the HMD set110A, the user of the HMD set110B, the user of the HMD set110C, and the user of the HMD set110D are referred to as “user5A”, “user5B”, “user5C”, and “user5D”, respectively. A reference numeral of each component related to the HMD set110A, a reference numeral of each component related to the HMD set110B, a reference numeral of each component related to the HMD set110C, and a reference numeral of each component related to the HMD set110D are appended by A, B, C, and D, respectively. For example, the HMD120A is included in the HMD set110A.
FIG. 12Ais a schematic diagram of HMD systems in which several users sharing the virtual space interact via a network according to at least one embodiment of this disclosure. Each HMD120provides the user5with the virtual space11. Computers200A to200D provide the users5A to5D with virtual spaces11A to11D via HMDs120A to120D, respectively. InFIG. 12A, the virtual space11A and the virtual space11B are formed by the same data. In other words, the computer200A and the computer200B share the same virtual space. An avatar object6A of the user5A and an avatar object6B of the user5B are present in the virtual space11A and the virtual space11B. The avatar object6A in the virtual space11A and the avatar object6B in the virtual space11B each wear the HMD120. However, the inclusion of the HMD120A and HMD120B is only for the sake of simplicity of description, and the avatars do not wear the HMD120A and HMD120B in the virtual spaces11A and11B, respectively.
In at least one aspect, the processor210A arranges a virtual camera14A for photographing a field-of-view region17A of the user5A at the position of eyes of the avatar object6A.
FIG. 12Bis a diagram of a field of view of a HMD according to at least one embodiment of this disclosure.FIG. 12Bcorresponds to the field-of-view region17A of the user5A inFIG. 12A. The field-of-view region17A is an image displayed on a monitor130A of the HMD120A. This field-of-view region17A is an image generated by the virtual camera14A. The avatar object6B of the user5B is displayed in the field-of-view region17A. Although not included inFIG. 12B, the avatar object6A of the user5A is displayed in the field-of-view image of the user5B.
In the arrangement inFIG. 12B, the user5A can communicate to/from the user5B via the virtual space11A through conversation. More specifically, voices of the user5A acquired by a microphone170A are transmitted to the HMD120B of the user5B via the server600and output from a speaker180B provided on the HMD120B. Voices of the user5B are transmitted to the HMD120A of the user5A via the server600, and output from a speaker180A provided on the HMD120A.
The processor210A translates an operation by the user5B (operation of HMD120B and operation of controller300B) in the avatar object6B arranged in the virtual space11A. With this, the user5A is able to recognize the operation by the user5B through the avatar object6B.
FIG. 13is a sequence chart of processing to be executed by the system100according to at least one embodiment of this disclosure. InFIG. 13, although the HMD set110D is not included, the HMD set110D operates in a similar manner as the HMD sets110A,110B, and110C. Also in the following description, a reference numeral of each component related to the HMD set110A, a reference numeral of each component related to the HMD set110B, a reference numeral of each component related to the HMD set110C, and a reference numeral of each component related to the HMD set110D are appended by A, B, C, and D, respectively.
In Step S1310A, the processor210A of the HMD set110A acquires avatar information for determining a motion of the avatar object6A in the virtual space11A. This avatar information contains information on an avatar such as motion information, face tracking data, and sound data. The motion information contains, for example, information on a temporal change in position and inclination of the HMD120A and information on a motion of the hand of the user5A, which is detected by, for example, a motion sensor420A. An example of the face tracking data is data identifying the position and size of each part of the face of the user5A. Another example of the face tracking data is data representing motions of parts forming the face of the user5A and line-of-sight data. An example of the sound data is data representing sounds of the user5A acquired by the microphone170A of the HMD120A. In at least one embodiment, the avatar information contains information identifying the avatar object6A or the user5A associated with the avatar object6A or information identifying the virtual space11A accommodating the avatar object6A. An example of the information identifying the avatar object6A or the user5A is a user ID. An example of the information identifying the virtual space11A accommodating the avatar object6A is a room ID. The processor210A transmits the avatar information acquired as described above to the server600via the network2.
In Step S1310B, the processor210B of the HMD set110B acquires avatar information for determining a motion of the avatar object6B in the virtual space11B, and transmits the avatar information to the server600, similarly to the processing of Step S1310A. Similarly, in Step S1310C, the processor210C of the HMD set110C acquires avatar information for determining a motion of the avatar object6C in the virtual space11C, and transmits the avatar information to the server600.
In Step S1320, the server600temporarily stores pieces of avatar information received from the HMD set110A, the HMD set110B, and the HMD set110C, respectively. The server600integrates pieces of avatar information of all the users (in this example, users5A to5C) associated with the common virtual space11based on, for example, the user IDs and room IDs contained in respective pieces of avatar information. Then, the server600transmits the integrated pieces of avatar information to all the users associated with the virtual space11at a timing determined in advance. In this manner, synchronization processing is executed. Such synchronization processing enables the HMD set110A, the HMD set110B, and the HMD set110C to share mutual avatar information at substantially the same timing.
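A minimal sketch of this server-side synchronization step is given below, assuming avatar information arrives as dictionaries keyed by user ID and room ID and that a send function is available for delivery back to each user in the room (all names here are hypothetical):

from collections import defaultdict

class AvatarSyncServer:
    # Sketch of the synchronization processing: avatar information is
    # buffered per room ID and, at a predetermined timing, the integrated
    # list is sent back to every user associated with that room.

    def __init__(self, send_fn):
        self._buffer = defaultdict(dict)   # room_id -> {user_id: avatar_info}
        self._send = send_fn               # send_fn(user_id, payload), assumed transport

    def receive(self, avatar_info):
        room = avatar_info["room_id"]
        self._buffer[room][avatar_info["user_id"]] = avatar_info

    def flush(self, room_id):
        # Integrate and broadcast at the predetermined timing
        # (e.g., once per frame or on a timer).
        integrated = list(self._buffer[room_id].values())
        for user_id in self._buffer[room_id]:
            self._send(user_id, integrated)

# Usage with a stand-in transport
sent = []
server = AvatarSyncServer(lambda uid, payload: sent.append((uid, payload)))
server.receive({"room_id": "room-1", "user_id": "5A", "position": (0, 0, 0)})
server.receive({"room_id": "room-1", "user_id": "5B", "position": (1, 0, 0)})
server.flush("room-1")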
Next, the HMD sets110A to110C execute processing of Step S1330A to Step S1330C, respectively, based on the integrated pieces of avatar information transmitted from the server600to the HMD sets110A to110C. The processing of Step S1330A corresponds to the processing of Step S1180ofFIG. 11.
In Step S1330A, the processor210A of the HMD set110A updates information on the avatar object6B and the avatar object6C of the other users5B and5C in the virtual space11A. Specifically, the processor210A updates, for example, the position and direction of the avatar object6B in the virtual space11based on motion information contained in the avatar information transmitted from the HMD set110B. For example, the processor210A updates the information (e.g., position and direction) on the avatar object6B contained in the object information stored in the memory module530. Similarly, the processor210A updates the information (e.g., position and direction) on the avatar object6C in the virtual space11based on motion information contained in the avatar information transmitted from the HMD set110C.
In Step S1330B, similarly to the processing of Step S1330A, the processor210B of the HMD set110B updates information on the avatar object6A and the avatar object6C of the users5A and5C in the virtual space11B. Similarly, in Step S1330C, the processor210C of the HMD set110C updates information on the avatar object6A and the avatar object6B of the users5A and5B in the virtual space11C.
FIG. 14is a diagram of a configuration of the HMD set110according to at least one embodiment of this disclosure. InFIG. 14, the HMD set110further includes a wearing sensor195in addition to the above-mentioned components. The wearing sensor195is configured to generate event information for indicating whether or not the user5is wearing the HMD120. Specifically, when the user5has removed the HMD120(when state of HMD120has transitioned from worn state to unworn state), the wearing sensor195generates and transmits event information to the computer200. When the user5has put on the HMD120(when state of HMD120has transitioned from unworn state to worn state), the wearing sensor195generates event information and transmits the event information to the computer200. The configuration of the wearing sensor195is not particularly limited. For example, in at least one embodiment, the elastic force of a spring installed in the HMD120changes when the user5has put on the HMD120. In this case, the wearing sensor195may generate event information based on the change in elastic force of this spring. Alternatively, in at least one embodiment, a current flowing between two pads to be in contact with a part (nose) of the body and installed in the HMD120changes when the user5has put on the HMD120. In this case, the wearing sensor195may generate event information based on the change in current flowing between those two pads.
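A minimal sketch of such a wearing sensor follows, assuming the measured quantity is the current flowing between the two pads and using a hypothetical calibration threshold; note that only worn/unworn transitions produce event information:

class WearingSensor:
    # Sketch of the wearing sensor 195: it watches a measured value (for
    # example, the current between the two nose pads) and emits event
    # information only when the worn/unworn state transitions.

    def __init__(self, emit_event, current_threshold=0.5):
        self._emit = emit_event            # emit_event(event) forwards to the computer 200
        self._threshold = current_threshold  # hypothetical calibration value
        self._worn = False

    def update(self, measured_current):
        worn_now = measured_current > self._threshold
        if worn_now and not self._worn:
            self._emit({"type": "hmd_worn"})       # transitioned to worn state
        elif not worn_now and self._worn:
            self._emit({"type": "hmd_removed"})    # transitioned to unworn state
        self._worn = worn_now

# Usage: the user puts the HMD on, then removes it
events = []
sensor = WearingSensor(events.append)
for current in (0.1, 0.8, 0.9, 0.2):
    sensor.update(current)
print(events)   # [{'type': 'hmd_worn'}, {'type': 'hmd_removed'}]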
Next, an example of processing of synchronizing motions of the avatar objects6A and6B between the HMD set110A and the HMD set110B is described with reference toFIG. 1,FIG. 15A,FIG. 15B, andFIG. 16.FIG. 15Ais a diagram of the virtual space11A to be provided to the user5A according to at least one embodiment of this disclosure.FIG. 15Bis a diagram of the virtual space11B to be provided to the user5B according to at least one embodiment of this disclosure.FIG. 16is a sequence diagram of processing of synchronizing motions of the avatar objects6A and6B between the HMD set110A and the HMD set110B according to at least one embodiment of this disclosure. In at least one embodiment of this description, inFIG. 15AandFIG. 15B, the avatar object6A (first avatar) associated with the HMD set110A (user5A) and the avatar object6B (second avatar) associated with the HMD set110B (user5B) share the same virtual space. In other words, the user5A and the user5B share one virtual space via the network2. In at least one embodiment of this disclosure, the avatar object has the same meaning as the avatar.
InFIG. 15A, the virtual space11A of the user5A includes the avatar object6A and the avatar object6B. The avatar object6A is operated by the user5A and moves in association with a motion of the user5A. The avatar object6A includes: a left hand (part of virtual body of avatar object6A), which moves in association with a motion (motion of left hand of user5A) of a controller300L of the HMD set110A; a right hand, which moves in association with a motion (motion of right hand of user5A) of a controller300R of the HMD set110A; and a face, whose facial expression moves in association with a facial expression of the user5A. The face of the avatar object6A includes a plurality of face parts (e.g., eyes, eyebrows, and mouth). The avatar object6B is operated by the user5B and moves in association with a motion of the user5B. The avatar object6B includes: a left hand, which moves in association with a motion of a controller300L of the HMD set110B indicating a motion of a left hand of the user5B; a right hand, which moves in association with a motion of a controller300R of the HMD set110B indicating a motion of a right hand of the user5B; and a face, whose facial expression moves in association with a facial expression of the user5B. The face of the avatar object6B includes a plurality of face parts (e.g., eyes, eyebrows, and mouth).
In at least one embodiment, the avatar object6A is not visible in the virtual space11A provided to the user5A. In this case, the avatar object6A arranged in the virtual space11A includes at least the virtual camera14that moves in association with the motion of the HMD120of the HMD set110A.
The positions of the avatar objects6A and6B may also be identified based on the positions of the HMDs120of the HMD sets110A and110B, respectively. Similarly, the directions of the faces of the avatar objects6A and6B may also be identified based on the inclinations of the HMDs120of the HMD sets110A and110B, respectively. The motions of the hands of the avatar objects6A and6B may also be identified based on motions of the controllers300of the HMD sets110A and110B, respectively. In particular, the motions of the left hands of the avatar objects6A and6B may be identified based on the motions of the controllers300L of the HMD sets110A and110B, respectively, and the motions of the right hands of the avatar objects6A and6B may be identified based on the motions of the controllers300R of the HMD sets110A and110B, respectively. The facial expressions of the avatar objects6A and6B may also be identified based on the facial expressions (states) of the users5A and5B, respectively.
Corresponding virtual cameras14(FIG. 4) may be arranged at the eye of each of the avatar objects6A and6B. In particular, a left eye virtual camera may be arranged at the left eye of each of the avatar objects6A and6B, and a right eye virtual camera may be arranged at the right eye of each of the avatar objects6A and6B. In at least one embodiment, corresponding virtual cameras14are arranged at the eye of each of the avatar objects6A and6B. As a result, in at least one embodiment, the field-of-view region15of the avatar object6A matches the field-of-view region15of the virtual camera14arranged at the avatar object6A (seeFIG. 4). Similarly, in at least one embodiment, the field-of-view region15of the avatar object6B matches the field-of-view region15of the virtual camera14arranged at the avatar object6B (seeFIG. 4).
InFIG. 15B, the virtual space11B of the user5B includes the avatar object6A and the avatar object6B. The position of each of the avatar objects6A and6B in the virtual space11A may correspond to the position of each of the avatar objects6A and6B in the virtual space11B.
In at least one embodiment, the avatar object6B is not visible in the virtual space11B provided to the user5B. In this case, the avatar object6B arranged in the virtual space11B includes at least the virtual camera14that moves in association with the motion of the HMD120of the HMD set110B.
Next, inFIG. 16, in Step S1610, the processor210of the HMD set110A generates voice data on the user5A. For example, when the user5A has input a voice into the microphone170(voice input device) of the HMD set110A, the microphone170generates voice data representing the input voice. After that, the microphone170transmits the generated voice data to the processor210via the input/output interface240.
Next, in Step S1611, the processor210of the HMD set110A generates control information on the avatar object6A, and then transmits the generated control information on the avatar object6A and the voice data representing the voice of the user5A (voice data on user5A) to the server600. After that, the processor610of the server600receives the control information on the avatar object6A and the voice data on the user5A from the HMD set110A (Step S1612).
In this case, the control information on the avatar object6A is information required for controlling the motion of the avatar object6A. The control information on the avatar object6A may contain information (position information) on a position of the avatar object6A, information (face direction information) on a direction of the face of the avatar object6A, information (hand information) on states of the hands (left hand and right hand) of the avatar object6A, and information (face information) on the facial expression of the avatar object6A.
The face information on the avatar object6A contains information on states of the plurality of face parts. The information on states of the plurality of face parts contains information (eye information) on states of the eyes (e.g., shape of sclera and position of pupil and iris with respect to sclera) of the avatar object6A, information (eyebrow information) on states of the eyebrows (e.g., positions and shapes of eyebrows) of the avatar object6A, and information (mouth information) on states of the mouth (e.g., position and shape of mouth) of the avatar object6A.
More specifically, the eye information on the avatar object6A contains information on a state of a left eye of the avatar object6A and information on a state of a right eye of the avatar object6A. The eyebrow information on the avatar object6A contains information on a state of a left eyebrow of the avatar object6A and information on a state of a right eyebrow of the avatar object6A.
The processor210of the HMD set110A acquires an image representing the eyes (left eye and right eye) and eyebrows (left eyebrow and right eyebrow) of the user5A from the second camera160mounted on the HMD120, and identifies the states of the eyes (left eye and right eye) and eyebrows (left eyebrow and right eyebrow) of the user5A based on the acquired image and a predetermined image processing algorithm. Next, the processor210generates information on the states of the eyes (left eye and right eye) of the avatar object6A and information on the states of the eyebrows (left eyebrow and right eyebrow) of the avatar object6A based on the identified states of the eyes and eyebrows of the user5A.
Similarly, the processor210of the HMD set110A acquires an image representing the mouth and surroundings of the mouth of the user5A from the first camera150, and identifies the state of the mouth of the user5A based on the acquired image and a predetermined image processing algorithm. Next, the processor210generates information on the state of the mouth of the avatar object6A based on the identified state of the mouth of the user5A.
In this manner, the processor210of the HMD set110A can generate the eye information on the avatar object6A corresponding to the eyes of the user5A, the eyebrow information on the avatar object6A corresponding to the eyebrows of the user5A, and the mouth information on the avatar object6A corresponding to the mouth of the user5A.
The processor210may identify the facial expression (e.g., smile) of the user5A from among a plurality of types of facial expressions (e.g., smile, sorrow, poker face, anger, surprise, and confusion) stored in the storage230or the memory220based on an image photographed by the second camera160, an image photographed by the first camera150, and a predetermined image processing algorithm. After that, the processor210can generate face information on the facial expression (e.g., smile) of the avatar object6A based on the identified facial expression of the user5A. In this case, the storage230may store facial expression data containing the plurality of types of facial expressions of the avatar object6A and a plurality of pieces of face information on the avatar object6A associated with the plurality of types of facial expressions, respectively.
For example, when the processor210identifies the facial expression of the avatar object6A as a smile, the processor210acquires the face information representing the smile of the avatar object6A based on the facial expression (smile) and facial expression data on the face of the avatar object6A. The facial expression data representing the smile of the avatar object6A contains the eye information, eyebrow information, and mouth information on the avatar object6A at a time when the facial expression of the avatar object6A is a smile.
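A minimal sketch of this lookup is given below, assuming the facial expression data is stored as a mapping from expression type to the corresponding eye, eyebrow, and mouth information (the labels and values are purely illustrative):

# Hypothetical facial expression data stored for avatar object 6A: each of the
# plurality of facial expression types maps to the eye, eyebrow, and mouth
# information used when that expression is applied to the avatar.
FACIAL_EXPRESSION_DATA = {
    "smile":      {"eyes": "narrowed", "eyebrows": "raised",   "mouth": "upturned"},
    "sorrow":     {"eyes": "lowered",  "eyebrows": "inner-up", "mouth": "downturned"},
    "poker_face": {"eyes": "neutral",  "eyebrows": "neutral",  "mouth": "neutral"},
}

def face_info_for(identified_expression, expression_data=FACIAL_EXPRESSION_DATA):
    # Look up the stored face information for the facial expression identified
    # from the camera images; fall back to a neutral face when the identified
    # expression is not in the stored set.
    return expression_data.get(identified_expression, expression_data["poker_face"])

# Usage: the processor identified a smile from the first and second camera images
control_info_face = face_info_for("smile")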
Next, in Step S1613, the processor210of the HMD set110B generates control information on the avatar object6B, and then transmits the generated control information on the avatar object6B to the server600. After that, the processor610of the server600receives the control information on the avatar object6B from the HMD set110B (Step S1614).
In this case, the control information on the avatar object6B is information required for controlling the motion of the avatar object6B. The control information on the avatar object6B may contain information (position information) on a position of the avatar object6B, information (face direction information) on a direction of the face of the avatar object6B, information (hand information) on states of the hands (left hand and right hand) of the avatar object6B, and information (face information) on the facial expression of the avatar object6B.
The face information on the avatar object6B contains information on states of the plurality of face parts. The information on states of the plurality of face parts contains information (eye information) on states of the eyes (e.g., shape of sclera and position of pupil and iris with respect to sclera) of the avatar object6B, information (eyebrow information) on states of the eyebrows (e.g., positions and shapes of eyebrows) of the avatar object6B, and information (mouth information) on states of the mouth (e.g., position and shape of mouth) of the avatar object6B. The method of acquiring the face information on the avatar object6B is similar to the method of acquiring the face information on the avatar object6A.
Next, the server600transmits control information on the avatar object6B to the HMD set110A (Step S1615), and transmits the control information on the avatar object6A and the voice data on the user5A to the HMD set110B (Step S1619). After that, in Step S1616, the processor210of the HMD set110A receives the control information on the avatar object6B, and then updates the states of the avatar objects6A and6B to update the virtual space data representing the virtual space11A (seeFIG. 15A) based on the control information on the avatar objects6A and6B (Step S1617).
Specifically, the processor210of the HMD set110A updates the positions of the avatar objects6A and6B based on the position information on the avatar objects6A and6B. The processor210updates the directions of the faces of the avatar objects6A and6B based on face direction information on the avatar objects6A and6B. The processor210updates the hands of the avatar objects6A and6B based on the hand information on the avatar objects6A and6B. The processor210updates the facial expressions of the avatar objects6A and6B based on the face information on the avatar objects6A and6B. In this manner, the virtual space data representing the virtual space11A including the updated avatar objects6A and6B is updated.
After that, the processor210of the HMD set110A identifies the field-of-view region15of the avatar object6A (virtual camera14) based on the position and inclination of the HMD120, and then updates the field-of-view image displayed on the HMD120based on the updated virtual space data and the field-of-view region15of the avatar object6A (Step S1618).
Meanwhile, in Step S1620, the processor210of the HMD set110B receives the control information on the avatar object6A and the voice data on the user5A, and then updates the states of the avatar objects6A and6B to update the virtual space data representing the virtual space11B (seeFIG. 15B) based on the control information on the avatar objects6A and6B (Step S1621).
Specifically, the processor210of the HMD set110B updates the positions of the avatar objects6A and6B based on the position information on the avatar objects6A and6B. The processor210updates the directions of the faces of the avatar objects6A and6B based on face direction information on the avatar objects6A and6B. The processor210updates the hands of the avatar objects6A and6B based on the hand information on the avatar objects6A and6B. The processor210updates the facial expressions of the avatar objects6A and6B based on the face information on the avatar objects6A and6B. In this manner, virtual space data representing the virtual space including the updated avatar objects6A and6B is updated.
After that, the processor210of the HMD set110B identifies the field-of-view region15of the avatar object6B (virtual camera14) based on the position and inclination of the HMD120, and then updates the field-of-view image displayed on the HMD120based on the updated virtual space data and the field-of-view region15of the avatar object6B (Step S1622).
After that, the processor210of the HMD set110B processes the voice data on the user5A based on the received voice data on the user5A, the information on the position of the avatar object6A included in the control information on the avatar object6A, the information on the position of the avatar object6B, and a predetermined voice processing algorithm. After that, the processor210transmits the processed voice data to the speaker180(voice output device), and the speaker180outputs the voice of the user5A based on the processed voice data (Step S1623). In this way, a voice chat (VR chat) can be implemented between users (between avatars) in the virtual space.
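The disclosure leaves the predetermined voice processing algorithm unspecified; the sketch below assumes one plausible choice, attenuating the received voice with the distance between the two avatar positions (the inverse-distance gain model and its floor value are assumptions):

import numpy as np

def process_voice(samples, speaker_pos, listener_pos, min_gain=0.05):
    # Attenuate the received voice of the speaking avatar (avatar object 6A)
    # based on its distance from the listening avatar (avatar object 6B)
    # before sending the result to the speaker 180.
    distance = float(np.linalg.norm(np.asarray(speaker_pos, dtype=float)
                                    - np.asarray(listener_pos, dtype=float)))
    gain = max(min_gain, 1.0 / (1.0 + distance))
    return np.asarray(samples, dtype=np.float32) * gain

# Usage: avatars two meters apart in the shared virtual space
out = process_voice([0.2, -0.1, 0.4], speaker_pos=(0, 0, 0), listener_pos=(2, 0, 0))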
In at least one embodiment of this disclosure, after the HMD sets110A and110B have transmitted the control information on the avatar object6A and the control information on the avatar object6B, respectively, to the server600, the server600transmits the control information on the avatar object6A to the HMD set110B, and transmits the control information on the avatar object6B to the HMD set110A. In this way, the motion of each of the avatar objects6A and6B can be synchronized between the HMD set110A and the HMD set110B.
Next, referring mainly toFIG. 17andFIG. 19, a description is given of an information processing method in a case where the user5A has removed the HMD120according to at least one embodiment of this disclosure.FIG. 17is a flowchart of an information processing method in a case where the user5A has removed the HMD120according to at least one embodiment of this disclosure.FIG. 19is a diagram of an example of the virtual space11B to be provided to the user5B, which is used to describe the information processing method according to at least one embodiment of this disclosure. In the flowchart ofFIG. 17, processing of the processor210of the HMD set110A transmitting the control information on the avatar object6A to the server600and processing of the processor210of the HMD set110B transmitting the control information on the avatar object6B to the server600are omitted for the sake of simplicity of description.
In at least one embodiment of this disclosure, inFIG. 19, the avatar object6A and the avatar object6B share the same virtual space. In other words, in at least one embodiment, the users5A and5B share one virtual space via the network2.
The virtual space11B of the user5B includes the avatar object6A and the avatar object6B. The processor210of the HMD set110B updates virtual space data representing the virtual space11B. In this description, for the sake of simplicity of description, illustration of the virtual space11A provided to the avatar object6A is omitted.
With reference toFIG. 17, in Step S1730, the processor210of the HMD set110A determines whether or not the user5A has removed the HMD120. In this determination, the processor210may determine whether or not the user5A has removed the HMD120based on inclination information on the HMD120, which is transmitted from the sensor190, or the event information transmitted from the wearing sensor195. The sensor190and the wearing sensor195function as sensors configured to detect whether or not the HMD120is worn.
With reference toFIG. 18A, a description is given of processing of determining whether or not the user5A has removed the HMD120through use of the sensor190.FIG. 18Ais a flowchart of processing of determining whether or not the user5A has removed the HMD120through use of the sensor190according to at least one embodiment of this disclosure. InFIG. 18A, the processor210receives inclination information indicating the inclination (roll angle, yaw angle, and pitch angle) of the HMD120from the sensor190(Step S1801). Next, the processor210determines whether or not the inclination of the HMD120is larger than a predetermined inclination based on the received inclination information (Step S1802). The predetermined inclination may be appropriately set by the user. In at least one embodiment, the inclination of the HMD120does not become larger than the predetermined inclination while the user5A is wearing the HMD120. In other words, the inclination of the HMD120is equal to or smaller than the predetermined inclination while the user5A is wearing the HMD120, whereas the HMD120is greatly inclined when the user5A has removed the HMD120, with the result that the inclination of the HMD120may exceed the predetermined inclination. The predetermined inclination is set from such a perspective.
For example, when the processor210determines that a pitch angle θ of the HMD120, which is an example of the inclination of the HMD120, is larger than a predetermined angle θth (YES in Step S1802), the processor210identifies removal of the HMD120by the user5A (i.e., fact that user5A is not wearing HMD120), and executes the processing of Step S1731. On the other hand, when the processor210determines that the pitch angle θ of the HMD120is equal to or smaller than the predetermined angle θth (NO in Step S1802), the processor210identifies the fact that the user5A is wearing the HMD120, and waits until reception of the inclination information on the HMD120from the sensor190.
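A minimal sketch of the determination in Step S1802 follows, assuming the pitch angle is reported in degrees; the 60-degree threshold is a placeholder for the predetermined angle θth, which the disclosure leaves to be set appropriately:

def hmd_removed_by_inclination(pitch_deg, threshold_deg=60.0):
    # Step S1802: treat the HMD as removed when its pitch angle exceeds the
    # predetermined angle (placeholder threshold; whether to compare the
    # signed or absolute value is also an assumption here).
    return abs(pitch_deg) > threshold_deg

# Usage: an HMD resting on a desk is strongly inclined
print(hmd_removed_by_inclination(75.0))   # True  -> generate unworn state information
print(hmd_removed_by_inclination(10.0))   # False -> user 5A is still wearing the HMD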
Next, with reference toFIG. 18B, a description is given of processing of determining whether or not the user5A has removed the HMD120through use of the wearing sensor195.FIG. 18Bis a flowchart of processing of determining whether or not the user5A has removed the HMD120through use of the wearing sensor195according to at least one embodiment of this disclosure. InFIG. 18B, the processor210determines whether or not the processor210has received the event information from the wearing sensor195(Step S1803). The event information transmitted from the wearing sensor195is information indicating the fact that the state of the HMD120in the HMD set110A has changed between the worn state and the unworn state. For example, when the user5A has removed the HMD120(when state of HMD120has transitioned from worn state to unworn state), the wearing sensor195outputs the event information. When the user5A has put on the HMD120(when state of HMD120has transitioned from unworn state to worn state), the wearing sensor195outputs the event information.
When the processor210determines that the processor210has received the event information from the wearing sensor195(YES in Step S1803), the processor210identifies removal of the HMD120by the user5A (that is, fact that user5A is not wearing HMD120), and executes the processing of Step S1731. On the other hand, when the processor210determines that the processor210has not received the event information from the wearing sensor195(NO in Step S1803), the processor210identifies the fact that the user5A is wearing the HMD120, and waits until reception of the event information from the wearing sensor195.
Next, referring back toFIG. 17, when the processor210determines that the user5A has removed the HMD120(i.e., fact that user5A is not wearing HMD120) (YES in Step S1730), the processor210generates information (unworn state information) indicating the fact that the user5A is not wearing the HMD120(Step S1731). After that, the processor210transmits the unworn state information to the server600via the network2(Step S1732).
Next, after the server600receives the unworn state information from the HMD set110A, the server600transmits the unworn state information to the HMD set110B via the network2(Step S1733). After that, the processor210of the HMD set110B receives the unworn state information from the server600, and then sets the facial expression of the avatar object6A arranged in the virtual space11B to a default facial expression (namely, facial expression in default setting) (Step S1734). The default expression of the avatar object6A refers to the facial expression of the avatar object6A in an initial state before the facial expression of the avatar object6A starts to be updated based on detected facial features of user5A. When the default facial expression is a smile, inFIG. 19, the facial expression of the avatar object6A is set to a smile. The processor210updates the motion (including facial expression of avatar object6B) of the avatar object6B based on the control information on the avatar object6B.
In Step S1734, the processor210may set the facial expression of the avatar object6A to the one selected in advance by the user5A as a mode of representation of the fact that the user5A is not wearing the HMD120. In this case, information on the facial expression of the avatar object6A selected by the user5A may be transmitted from the HMD set110A to the HMD set110B, and then stored into the storage230of the HMD set110B in advance.
Next, inFIG. 19, the processor210generates a speech bubble object1942A associated with the avatar object6A in the virtual space11B. The speech bubble object1942A displays information (e.g., “user5A is absent”) indicating the fact that the user5A is not wearing the HMD120.
The processor210may generate a subtitle object2043A in the virtual space11B instead of the speech bubble object1942A (seeFIG. 20). InFIG. 20, the subtitle object2043A displays information (e.g., “user5A is absent”) indicating the fact that the user5A is not wearing the HMD120. The subtitle object2043A may be arranged near the avatar object6A, or may be moved in association with the field-of-view region15of the avatar object6B so that the subtitle object2043A is always arranged in the field-of-view region15of the avatar object6B. Further, subtitles indicating the fact that the user5A is not wearing the HMD120may be superimposed on the field-of-view image displayed on the HMD120.
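A minimal sketch of this receiving-side handling (setting the default facial expression and attaching a speech bubble object) is shown below, using a hypothetical dictionary-based scene representation rather than any particular engine API:

DEFAULT_EXPRESSION = "smile"   # default-setting facial expression, assumed here

def on_unworn_state_info(avatar, virtual_space, user_name="user 5A"):
    # On reception of the unworn state information: set the sender's avatar
    # to its default facial expression (Step S1734) and attach a speech
    # bubble (or subtitle) object stating that the user is not wearing the HMD.
    avatar["expression"] = avatar.get("default_expression", DEFAULT_EXPRESSION)
    virtual_space["objects"].append({
        "type": "speech_bubble",
        "attached_to": avatar["id"],
        "text": f"{user_name} is absent",
    })

# Usage
scene = {"objects": []}
avatar_6a = {"id": "6A", "expression": "surprise", "default_expression": "smile"}
on_unworn_state_info(avatar_6a, scene)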
After that, the processor210updates the virtual space data representing the virtual space11B, and at the same time, updates the field-of-view CV of the avatar object6B in association with the motion of the HMD120of the HMD set110B. Next, the processor210updates the field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays a field-of-view image on the HMD120based on the updated field-of-view image data (Step S1736).
Meanwhile, the processor210of the HMD set110A transmits the unworn state information to the server600via the network2, and then transitions from an active mode to a sleep mode (Step S1737). In this case, preferably, the mode of the processor210is a sleep mode, whereas the mode of the sensor (wearing sensor195or sensor190) is an active mode.
According to at least one embodiment of this disclosure, when the user5A is not wearing the HMD120, the HMD set110B presents the user5B with the fact that the user5A is not wearing the HMD120. In particular, according to at least one embodiment of this disclosure, when the user5A is not wearing the HMD120, information indicating the fact that the user5A is not wearing the HMD120is visualized in the field-of-view image displayed on the HMD120of the HMD set110B. In this respect, the avatar object6A whose facial expression is set to the default expression and the speech bubble object1942A are visualized in the field-of-view image displayed on the HMD120of the HMD set110B.
In this manner, when the user5B communicates to/from the user5A on the virtual space11B, the user5B can visually recognize the default facial expression (e.g., smile) of the avatar object6A and the information displayed on the speech bubble object1942A, to thereby easily grasp the fact that the user5A is not wearing the HMD120. Therefore, the user is provided with a rich virtual experience.
In particular, the user5B may feel strange about the user5A who does not react at all when the user5B does not grasp the fact that the user5A is not wearing the HMD120. In this manner, the situation in which the user5B feels strange (e.g., uncanny valley) about the user5A who does not react to the user5B at all is reduced or avoided by allowing the user5B to grasp the fact that the user5A is not wearing the HMD120.
According to at least one embodiment of this disclosure, the user5A is determined not to be wearing the HMD120based on the information (inclination information or event information) transmitted from a sensor (sensor190or wearing sensor195) of the HMD set110A. After that, the unworn state information indicating the fact that the user5A is not wearing the HMD120is transmitted to the server600. In this manner, the fact that the user5A is not wearing the HMD120is automatically identified based on the information output from the sensor of the HMD set110A.
In at least one embodiment of this disclosure, the HMD set110A transmits the unworn state information to the HMD set110B via the server600, but the HMD set110A may transmit the inclination information transmitted from the sensor190to the HMD set110B instead of the unworn state information. In this case, the processor210of the HMD set110B may identify the fact that the user5A is not wearing the HMD120based on the received inclination information, and execute the processing defined in Step S1734to Step S1736.
Next, referring mainly toFIG. 21andFIG. 22, a description is given of an information processing method according to at least one embodiment of this disclosure in a case where the user5A has removed the HMD120and then put on the HMD120again.FIG. 21is a flowchart of the information processing method in a case where the user5A has put on the HMD120again according to at least one embodiment of this disclosure.FIG. 22is a diagram of the virtual space11B to be provided to the user5B, which is used to describe the information processing method according to at least one embodiment of this disclosure.
With reference toFIG. 21, in Step S2140, the processor210determines whether or not the user5A has put on the HMD120again. In this determination, the processor210may determine whether or not the user5A has put on the HMD120again based on the inclination information on the HMD120transmitted from the sensor190or the event information transmitted from the wearing sensor195. For example, when the processor210determines that the processor210has received the event information from the wearing sensor195, the processor210identifies the fact that the user5A has put on the HMD120and executes processing of Step S2141. On the other hand, when the processor210determines that the processor210has not received the event information from the wearing sensor195, the processor210identifies the fact that the user5A is not wearing the HMD120, and waits until reception of the event information from the wearing sensor195.
When the determination result of Step S2140is “YES”, the processor210transitions from the sleep mode to the active mode (Step S2141). Next, the processor210, which has transitioned to the active mode, generates control information on the avatar object6A, and generates worn state information indicating the fact that the user5A has put on the HMD120(Step S2142). After that, the processor210transmits the control information and worn state information on the avatar object6A to the server600via the network2(Step S2143).
Next, after the server600has received the control information and worn state information on the avatar object6A from the HMD set110A, the server600transmits the control information and worn state information on the avatar object6A to the HMD set110B via the network2(Step S2144). After that, the processor210of the HMD set110B updates the motion of the avatar object6A based on the control information on the avatar object6A, and also updates the motion of the avatar object6B based on the control information on the avatar object6B (Step S2145). In particular, the processor210updates the facial expression of the avatar object6A based on the face information on the avatar object6A contained in the control information on the avatar object6A, and updates the facial expression of the avatar object6B based on the face information on the avatar object6B contained in the control information on the avatar object6B.
Next, inFIG. 22, the processor210generates a speech bubble object2244A associated with the avatar object6A in the virtual space11B (Step S2146). Information indicating the fact that the user5A has put on the HMD120(e.g., “user5A has returned to seat”) is displayed on the speech bubble object2244A.
After that, the processor210updates the virtual space data on the virtual space11B, and also updates the field-of-view region15of the avatar object6B in association with the motion of the HMD120of the HMD set110B. Next, the processor210updates the field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays the field-of-view image on the HMD120based on the updated field-of-view image data (Step S2147).
According to at least one embodiment of this disclosure, when the user5A has removed the HMD120and then put on the HMD120again, the HMD set110B presents the user5B with the fact that the user5A is wearing the HMD120. In this manner, the user5B visually recognizes the updated facial expression of the avatar object6A and the speech bubble object2244A, to thereby be able to easily grasp the fact that the user5A has put on the HMD120again. Therefore, a user is provided with a rich virtual experience.
In the description of at least one embodiment of this disclosure, information indicating the fact that the user5A has put on the HMD120or is not wearing the HMD120is visualized in the field-of-view image, but at least one embodiment of this disclosure is not limited thereto. For example, the speaker180(sound output device) of the HMD set110B may output a voice guidance (e.g., “user5A has left seat”) indicating the fact that the user5A is not wearing the HMD120, or a voice guidance (e.g., “user5A has returned to seat”) indicating the fact that the user5A has put on the HMD120. In this case, the processor210of the HMD set110A transmits the voice guidance data to the HMD set110B via the server600together with the unworn state information (or worn state information).
In the description of at least one embodiment of this disclosure, the virtual space data on the virtual space11B is updated by the HMD set110B. However, the virtual space data may be updated by the server600. Further, in at least one embodiment, the field-of-view image data corresponding to the field-of-view image is updated by the HMD set110B, but the field-of-view image data may be updated by the server600. In this case, the HMD set110B displays the field-of-view image on the HMD120based on the field-of-view image data transmitted from the server600.
The order of processing steps defined in the respective steps in each ofFIG. 17andFIG. 21is just an example, and the order of those steps can be appropriately changed.
Next, with reference toFIG. 23toFIG. 30, a description is given of an information processing method according to at least one embodiment of this disclosure in a case where the user5A has removed the HMD120.FIG. 23is an exemplary flowchart of the information processing method according to at least one embodiment of this disclosure.FIG. 24,FIG. 25,FIG. 27, andFIG. 28are each a diagram of the virtual space11B to be provided to the user5B, which is used to describe the information processing method according to at least one embodiment of this disclosure.FIG. 26is a diagram of an example of display of the external device700associated with the user5A, which is used to describe the information processing method according to at least one embodiment of this disclosure.FIG. 29is a flowchart of the information processing method according to at least one embodiment of this disclosure in a case where the user5A has not put on the HMD120within a certain period of time from reception of "read" information.
With reference toFIG. 23, the processor210of the HMD set110A determines whether or not the user5A has removed the HMD120(Step S2350). The processing of determining whether or not the user5A has removed the HMD120through use of the sensor190is the same as that described in the above-mentioned at least one embodiment, and thus a detailed description thereof is omitted here.
In response to a determination in Step S2350that the user5A has removed the HMD120(that is, user5A is not wearing HMD120) (YES in Step S2350), the processor210generates information (unworn state information) indicating the fact that the user5A is not wearing the HMD120(Step S2351). After that, the processor210transmits the unworn state information to the server600via the network2(Step S2352). In Step S2351, the processor210may generate positional relationship information on a positional relationship between the HMD120and the controller300, and transmit the positional relationship information to the server600together with the unworn state information.
Next, the server600receives the unworn state information from the HMD set110A, and transmits the unworn state information to the HMD set110B via the network2(Step S2353).
On the other hand, the processor210of the HMD set110A transmits the unworn state information to the server600via the network2, and then, transitions from the active mode to the sleep mode (Step S2354). In this case, in at least one embodiment, the mode of the processor210is the sleep mode, whereas the mode of the sensor (wearing sensor195or sensor190) is the active mode.
Next, the processor210of the HMD set110B sets the posture of the avatar object6A arranged in the virtual space11B to a default posture (that is, posture in default setting) based on the unworn state information received from the server600(Step S2355). In at least one embodiment, a description is given based on an assumption that the virtual space is a room in which to search for a partner of a multiplayer game, but the virtual space is not limited thereto. The default posture of the avatar object6A refers to a posture in an initial state of the avatar object6A. When the default posture is lying, inFIG. 24, the posture of the avatar object6A is set to lying on the floor.
In Step S2355, the processor210may set, as a mode of representation of the fact that the user5A is not wearing the HMD120, the posture of the avatar object6A to a posture selected in advance by the user5A. In this case, information on the posture of the avatar object6A selected by the user5A may be transmitted from the HMD set110A to the HMD set110B, and then stored into the storage230of the HMD set110B in advance.
When the HMD set110B receives the positional relationship information indicating the positional relationship between the HMD120and the controller300, in Step S2355, the processor210may set the posture of the avatar object6A based on the positional relationship information. That is, when the user5A is not wearing the HMD120, the HMD120and the controller300are placed on a floor or a table in the real space in many cases. In this case, the avatar object6A in the virtual space11B may be set to have a posture of lying on the floor based on the positional relationship between the HMD120and the controller300.
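A possible heuristic for this posture selection is sketched below, under the assumption that device positions are available as simple 3-D coordinates; the floor-height threshold and the function names are illustrative only and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class DevicePosition:
    x: float
    y: float  # height above the floor, in meters
    z: float

def choose_avatar_posture(hmd: DevicePosition, controller: DevicePosition,
                          floor_height: float = 0.3) -> str:
    """Pick a default posture from the HMD/controller positional relationship.

    If both devices rest near the floor (or a low table), the avatar is shown
    lying down; otherwise a generic idle posture is used.
    """
    if hmd.y <= floor_height and controller.y <= floor_height:
        return "lying"
    return "idle"

# Usage example: HMD and controller both left on the floor.
assert choose_avatar_posture(DevicePosition(0, 0.1, 0), DevicePosition(0.2, 0.05, 0)) == "lying"
```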
Next, the processor210of the HMD set110B sets the facial expression of the avatar object6A arranged in the virtual space11B to the default facial expression (that is, facial expression in default setting) (Step S2356). When the default facial expression is closed eyes, inFIG. 24, the facial expression of the avatar object6A is set to the closed eyes.
In Step S2356, the processor210may set, as a mode of representation of the fact that the user5A is not wearing the HMD120, the facial expression of the avatar object6A to a facial expression selected in advance by the user5A. In this case, information on the facial expression of the avatar object6A selected by the user5A may be transmitted from the HMD set110A to the HMD set110B, and then stored into the storage230of the HMD set110B in advance.
Next, the processor210updates the virtual space data on the virtual space11B, and also updates the field-of-view region15of the avatar object6B in association with the motion of the HMD120of the HMD set110B. Next, the processor210updates field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays the field-of-view image on the HMD120based on the updated field-of-view image data (Step S2357).
In this manner, display of the posture and facial expression of the avatar object6A in the virtual space11B is set to the default (lying with closed eyes) so that the user5B can easily recognize the fact that the user5A is not wearing the HMD120and is not participating in content provided in the virtual space11B.
Next, after the processor210of the HMD set110B updates the field-of-view image in Step S2357, the processor210of the HMD set110B determines whether or not the avatar object6B associated with the user5B has performed a specific action (example of first action) on the avatar object6A associated with the user5A in the virtual space11B (Step S2358).
The specific action performed by the avatar object6B on the avatar object6A includes, for example, a motion involving a physical interaction between the avatar object6A and the avatar object6B. The physical interaction between the avatar object6A and the avatar object6B is detected by using a collider (collision object) for detecting collision with other objects. Although not shown, the collider is associated with each of the bodies of the avatar objects6A and6B, and is provided for determination of collision (determination of touch) between the avatar object6A and the avatar object6B. For example, when a virtual hand2400of the avatar object6B with a collider touches the avatar object6A with a collider, the avatar object6B and the avatar object6A are determined to have touched each other. The collider may be set over an entire range of the bodies of the avatar objects6A and6B, or may be set to a specific position (e.g., hand, leg, head, shoulder, or hip) of the bodies of the avatar objects6A and6B. In particular, in at least one embodiment, the collider is set to each fingertip of the virtual hand2400, and collision determination is performed for each finger independently. With this, determination of collision between fingers and other objects is accurately performed. The collider is formed as a transparent body, and is not displayed in the field-of-view image (FIG. 12B).
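The per-fingertip collision determination could look roughly like the following sketch, which uses simple sphere colliders; the SphereCollider type, the radii, and the finger names are assumptions made for illustration rather than identifiers from the disclosure.

```python
from dataclasses import dataclass
import math

@dataclass
class SphereCollider:
    x: float
    y: float
    z: float
    radius: float

def collides(a: SphereCollider, b: SphereCollider) -> bool:
    """Two sphere colliders touch when their centers are closer than the sum of radii."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z)) <= a.radius + b.radius

def touching_fingers(fingertips: dict, body: SphereCollider) -> list:
    """Return the names of fingers whose colliders touch the other avatar's body collider.

    Testing each fingertip independently mirrors the per-finger collision
    determination described above.
    """
    return [name for name, tip in fingertips.items() if collides(tip, body)]

# Usage example: only the index fingertip is in contact.
body = SphereCollider(0.0, 1.0, 0.0, 0.3)
tips = {"index": SphereCollider(0.0, 1.2, 0.0, 0.02),
        "thumb": SphereCollider(1.0, 1.0, 0.0, 0.02)}
assert touching_fingers(tips, body) == ["index"]
```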
In the above-mentioned example, the processor210determines collision between the avatar object6A and the avatar object6B based on the motion of the avatar object6B (e.g., motion of virtual hand2400). However, the determination is not limited to this example. For example, the processor210may determine collision with the avatar object6A based on the motion of the user5B (e.g., motion of hand of user5B).
The specific action of a motion involving the physical interaction may include, for example, a motion of holding and shaking the body of the avatar object6A or a motion of slapping the avatar object6A (e.g., slapping face or shoulder of avatar object6A) in addition to the motion of the avatar object6B touching the avatar object6A with the virtual hand2400. The specific action may be a motion involving a reciprocal motion, namely, a motion of touching the avatar object6A in the process of the reciprocal motion of the virtual hand2400. With such a configuration, even when the avatar object6B has erroneously touched the avatar object6A due to an erroneous operation by the user5B, the fact that the specific action is not performed is detected, and erroneous detection of the motion of the virtual hand2400is prevented.
For example, inFIG. 25, when the avatar object6B is holding the body (e.g., trunk of the body) of the avatar object6A with the virtual hand2400and a reciprocal motion of the virtual hand2400is performed, the processor210determines that a shaking motion is performed as the specific action. Holding (selecting) the body of the avatar object6A with the virtual hand2400can be implemented by, for example, the user5B operating (pressing) a key (not shown) of the controller300under a state in which the collider of the virtual hand2400and the collider of the body of the avatar object6A are in contact with each other. This shaking motion may include a motion of touching the avatar object6A in the process of the reciprocal motion of the virtual hand2400at an acceleration of a fixed value or more. For example, in the case of the state inFIG. 25, a motion may be identified as the shaking motion when the virtual hand2400moves reciprocally in the z-axis direction under a state in which the virtual hand2400is holding the body of the avatar object6A and a state of an acceleration in the +z-axis direction being a fixed value or more and an acceleration in the −z-axis direction being a fixed value or more occurs continuously a predetermined number of times or more. The direction in which the virtual hand2400moves reciprocally is not limited to the z-axis direction, but may be set to any direction. The shaking motion may include a motion of the virtual hand2400passing one or more relative coordinates associated with the avatar object6A a predetermined number of times or more within a certain period of time. For example, in the case of the state inFIG. 25, one or more relative coordinates may be a first relative coordinate, which is set at a free-selected position in the +z direction, and a second relative coordinate, which is set at a free-selected position in the −z direction, with respect to a position of the body of the avatar object6A held by the virtual hand2400. In this example, a motion may be identified as the shaking motion when the virtual hand2400reciprocally moves in the z-axis direction under a state in which the virtual hand2400is holding the body of the avatar object6A, the virtual hand2400passes the first relative coordinate a predetermined number of times or more within a certain period of time, and the virtual hand2400passes the second relative coordinate a predetermined number of times or more within a certain period of time. The direction of reciprocally moving the virtual hand2400is not limited to the z-axis direction, and may be set to any direction. In this case, relative coordinates of the virtual hand2400may also be set with respect to any direction.
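One way to realize the reciprocal-motion criterion for the shaking motion is sketched below: while the virtual hand is holding the avatar, per-frame accelerations along a chosen axis are inspected, and a shake is recognized once strong positive and negative accelerations alternate a given number of times. The threshold values and the function name are illustrative assumptions, not values from the disclosure.

```python
def detect_shake(accelerations: list, holding: bool,
                 threshold: float = 3.0, required_cycles: int = 2) -> bool:
    """Detect a shaking motion from per-frame accelerations along one axis.

    A "cycle" is one acceleration above +threshold followed by one below
    -threshold. The motion counts as a shake only while the virtual hand is
    holding the avatar object's body.
    """
    if not holding:
        return False
    cycles = 0
    expecting_positive = True
    for a in accelerations:
        if expecting_positive and a >= threshold:
            expecting_positive = False
        elif not expecting_positive and a <= -threshold:
            expecting_positive = True
            cycles += 1
    return cycles >= required_cycles

# Example: two full back-and-forth swings above the threshold count as a shake.
assert detect_shake([4.0, -3.5, 3.8, -4.2], holding=True)
assert not detect_shake([4.0, -3.5, 3.8, -4.2], holding=False)
```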
In at least one embodiment, the specific action is determined to be performed for a motion recognized by the user5B as shaking. That is, the shaking direction is not uniform, and is assumed to be different depending on the user. Thus, the movement direction of the virtual hand2400may be any direction, and is not required to be limited to a specific direction. When the motion of a specific action is required to be determined in a rigid manner, or the motion of the avatar object6B is required to be realistic (for example, preventing virtual hand2400from digging into avatar object6A), the processor210may determine that a shaking motion has occurred in consideration of the movement direction of the virtual hand2400, that is, in response to a reciprocal motion in a specific direction.
On the other hand, when the virtual hand2400touches (collides with) the avatar object6A at a fixed acceleration or more under a state in which the avatar object6B is not holding the body of the avatar object6A with the virtual hand2400, the processor210identifies that a slapping motion is performed as the specific action. In at least one embodiment, in order to help prevent erroneous detection of the motion of the virtual hand2400, in particular, the slapping motion includes a plurality of times of touching motions (collider collision) performed on the avatar object6A in the process of the reciprocal motion (specifically, double slapping motion) together with the reciprocal motion of the virtual hand2400at a fixed acceleration or more. For example, in the case of the state inFIG. 24, a motion may be identified as the slapping motion when the virtual hand2400reciprocally moves in the y-axis direction, and touching between the collider of the virtual hand2400and the collider of the face of the avatar object6A has occurred a predetermined number of times or more under a state in which an acceleration in the +y-axis direction is a fixed value or more and an acceleration in the −y-axis direction is a fixed value or more. The direction of reciprocally moving the virtual hand2400is not limited to the y-axis direction, and may be set to any direction. This double slapping motion may be identified, for example, when the virtual hand2400of the avatar object6B passes relative coordinates associated with the avatar object6A a predetermined number of times or more within a certain period of time, and a plurality of times of touching motions (touch determination) performed on the avatar object6A have occurred within a certain period of time. For example, in the case of the state inFIG. 24, one or more relative coordinates may be a first relative coordinate, which is set at a free-selected position in the +y direction, and a second relative coordinate, which is set at a free-selected position in the −y direction, with respect to the face of the avatar object6A. In this example, a motion may be identified as the slapping motion when the virtual hand2400reciprocally moves in the y-axis direction, and touching between the collider of the virtual hand2400and the collider of the face of the avatar object6A has occurred a predetermined number of times or more under a state in which the virtual hand2400passes the first relative coordinate a predetermined number of times or more within a certain period of time, and the virtual hand2400passes the second relative coordinate a predetermined number of times or more within a certain period of time. The direction of reciprocally moving the virtual hand2400is not limited to the y-axis direction, and may be set to any direction. In this case, relative coordinates of the virtual hand2400may also be set with respect to any direction. The processor210may determine that a slapping motion has occurred in consideration of the movement direction of the virtual hand2400, that is, in response to a reciprocal motion in a specific direction.
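A comparable sketch for the double-slap determination is given below: collisions with the face collider are counted only when the hand is not holding the body and the contact occurs at a sufficiently high acceleration, and the slap is recognized once enough contacts fall inside a short time window. The names, thresholds, and window length are assumptions for illustration.

```python
def detect_double_slap(contacts: list, holding: bool,
                       accel_threshold: float = 5.0,
                       window_seconds: float = 1.0,
                       required_hits: int = 2) -> bool:
    """Detect a slapping motion from (timestamp, acceleration) contact records.

    Each record is a collision between the virtual hand collider and the face
    collider. Only sufficiently fast contacts count, and required_hits of them
    must occur within window_seconds while the hand is not holding the body.
    """
    if holding:
        return False
    fast = [t for t, a in contacts if abs(a) >= accel_threshold]
    for start in fast:
        in_window = [t for t in fast if 0 <= t - start <= window_seconds]
        if len(in_window) >= required_hits:
            return True
    return False

# Example: two fast contacts 0.4 s apart are recognized as a double slap.
assert detect_double_slap([(0.0, 6.0), (0.4, 6.5)], holding=False)
```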
The avatar object6A may be set so that the avatar object6A moves in association with the motion of the avatar object6B. For example, inFIG. 25, when the avatar object6B has performed a shaking motion on the avatar object6A, the processor210may set the motion of the avatar object6A so that the avatar object6A is shaken in association with the shaking motion. However, when the avatar object6B has performed a shaking motion or slapping motion on the avatar object6A, the avatar object6A may be set so as not to follow the motion, that is, so as not to move.
Next, when the processor210determines that the avatar object6B has performed a specific action on the avatar object6A (YES in Step S2358), the processor210generates information (action execution information) indicating the fact that the user5B has performed the specific action (Step S2360). After that, the processor210transmits the action execution information to the server600via the network2(Step S2361).
Next, the server600generates information (participation request notification information) on participation request notification for requesting the user5A to participate in content provided in the virtual space11based on the action execution information received from the HMD set110B (Step S2362). The participation request notification is notification for requesting the user5A, who is not wearing the HMD120, to participate in the content provided in the virtual space11. The participation request notification information may contain, for example, the type of a specific action or the strength of the motion of a specific action in addition to identification information on the avatar object6B having performed the specific action and the identification information on the avatar object6A having received the specific action.
Next, the server600identifies personal information on the user5A based on the generated participation request notification information (Step S2363). For example, personal information (e.g., email address or social networking service (SNS) address) on the user5A is registered in advance in an information table stored in the storage630of the server600, and is managed in association with identification information on the avatar object6A. In Step S2363, the server600refers to the information table stored in the storage630to acquire the personal information on the user5A based on the identification information on the avatar object6A contained in the participation request notification information. The personal information on the user5A may be managed in a server different from the server600. Examples of the server different from the server600include a server that can distribute a game program to a user having registered personal information, namely, a server that plays a role of a platform for distribution of a game. However, the server different from the server600is not limited to those examples.
Next, the server600transmits the participation request notification information to the transmission destination (e.g., email address of user5A) acquired in Step S2363(Step S2364). A communication tool such as an email, chat, or SNS can be used as the method of transmitting the participation request notification information to the user5A.
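The server-side lookup and dispatch could be sketched roughly as follows. The registry dictionary stands in for the information table stored in the storage630, and the send_message callable stands in for whatever communication tool (email, chat, or SNS) is actually used; all names, addresses, and message text here are assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class ParticipationRequest:
    requesting_avatar_id: str   # e.g. the avatar object 6B
    target_avatar_id: str       # e.g. the avatar object 6A
    action_type: str            # e.g. "shake" or "slap"

# Stand-in for the information table registered in advance on the server.
CONTACT_REGISTRY = {
    "avatar_6A": {"email": "user5A@example.com"},
}

def dispatch_participation_request(request: ParticipationRequest, send_message) -> bool:
    """Resolve the target user's contact address and forward the notification.

    send_message is any callable taking (address, text); in practice it would
    wrap an email, chat, or SNS client.
    """
    entry = CONTACT_REGISTRY.get(request.target_avatar_id)
    if entry is None:
        return False  # personal information not registered for this avatar
    text = (f"{request.requesting_avatar_id} is contacting you "
            f"(action: {request.action_type}). Put on your HMD to join!")
    send_message(entry["email"], text)
    return True

# Usage example with a dummy sender that just prints the message.
dispatch_participation_request(
    ParticipationRequest("avatar_6B", "avatar_6A", "shake"),
    send_message=lambda addr, text: print(addr, text))
```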
Next, the external device700receives the participation request notification information using a communication tool, and displays a notification of receiving the participation request notification information on the display2651(Step S2365). The external device700is a terminal device (user terminal) possessed by the user5A, and may be a portable terminal such as a smartphone, a personal digital assistant (PDA), a tablet computer, a phablet, or a wearable device, or may be, for example, a personal computer or a video game console.
Next, the external device700activates the communication tool based on an input operation by the user5A, who has noticed the reception notification, and displays a message M1on the display2651as illustrated inFIG. 26(Step S2366). The message M1contains information (for example, [partner player name (e.g., user5B)] is now contacting you in [content name (e.g., name of content provided to virtual space11)]. Let's activate [content] and join them!) indicating the fact that the user5B is requesting the user5A for participation in the virtual space11. When the external device700is a personal computer or a video game console, and an image of the content provided in the virtual space11is displayed also in such a device, a message about the participation request notification may be displayed on the display. The message M1may be a message that urges the user5A to put on the HMD120of the HMD set110A.
Next, the external device700determines that the user5A has read the message M1based on the fact that the message M1is displayed on the display2651, and generates “read” information indicating the fact that the message M1is read (Step S2367). The “read” information is generated based on reaction of the user5A to the message M1, and may be generated based on, for example, a reply to an email or application of “Like”. After that, the external device700transmits the “read” information to the server600via the network2(Step S2368).
Next, after the server600has received the “read” information from the external device700, the server600transmits the “read” information to the HMD set110B via the network2(Step S2369).
Next, the processor210of the HMD set110B sets the facial expression of the avatar object6A arranged in the virtual space11B to an updated facial expression based on the "read" information received from the server600(Step S2370). For example, inFIG. 27, the processor210updates the facial expression of the avatar object6A, which has been in a closed-eye state (example of first state), to an opened-eye state (example of second state).
In Step S2370, the processor210may update the posture of the avatar object6A based on the “read” information received from the server600. For example, the posture of the avatar object6A in a lying state may be updated to a sitting state.
Next, the processor210updates the virtual space data representing the virtual space11B, and updates the field-of-view region15of the avatar object6B based on the motion of the HMD120of the HMD set110B. Next, the processor210updates the field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays a field-of-view image on the HMD120based on the updated field-of-view image data (Step S2371). With this, the user5B can easily grasp the fact that the user5A has reacted to the participation request notification by visually recognizing the state in which eyes of the avatar object6A are open.
Meanwhile, in Step S2372, the server600also transmits the “read” information to the HMD set110A as well as the HMD set110B.
Next, the processor210of the HMD set110A determines whether or not the user5A has put on the HMD120within a certain period of time from reception of the “read” information (Step S2373). The processor210may determine whether or not the user5A has put on the HMD120based on the inclination information on the HMD120transmitted from the sensor190or the event information transmitted from the wearing sensor195. The determination of whether or not the user5A has put on the HMD120may be made by determining whether or not the inclination of the HMD120is smaller than a predetermined inclination based on the inclination information indicating the inclination (roll angle, yaw angle, and pitch angle) of the HMD120acquired by the sensor190. As described above, while the user5A is wearing the HMD120, the inclination of the HMD120is equal to or smaller than a predetermined inclination, and thus when the inclination of the HMD120is smaller than the predetermined inclination, the user5A is determined to be wearing the HMD120.
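As a rough illustration of the inclination-based check, the sketch below treats the HMD as worn while its roll and pitch stay within a small threshold; the 30-degree value and the function name are assumptions of this sketch, not values taken from the disclosure.

```python
def is_hmd_worn(roll_deg: float, pitch_deg: float, threshold_deg: float = 30.0) -> bool:
    """Infer the worn state from the HMD inclination reported by the sensor.

    While the HMD is on a user's head, its roll and pitch remain close to
    level; a large sustained inclination suggests the HMD has been set down.
    """
    return abs(roll_deg) <= threshold_deg and abs(pitch_deg) <= threshold_deg

# A head-mounted HMD tilts only slightly; one resting on its side does not.
assert is_hmd_worn(roll_deg=5.0, pitch_deg=-10.0)
assert not is_hmd_worn(roll_deg=80.0, pitch_deg=0.0)
```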
In response to a determination in Step S2373that the user5A has put on the HMD120(YES in Step S2373), the processor210of the HMD set110A transitions from the sleep mode to the active mode (Step S2374). Next, the processor210, which has transitioned to the active mode, generates control information on the avatar object6A and generates the worn state information indicating the fact that the user5A has put on the HMD120(Step S2375). After that, the processor210transmits the control information and worn state information on the avatar object6A to the server600via the network2(Step S2376).
Next, after the server600has received the control information and worn state information on the avatar object6A from the HMD set110A, the server600transmits the control information and worn state information on the avatar object6A to the HMD set110B via the network2(Step S2377).
Next, the processor210of the HMD set110B updates the motion of the avatar object6A based on the control information on the avatar object6A, and also updates the motion of the avatar object6B based on the control information on the avatar object6B (Step S2378). In particular, the processor210updates the posture of the avatar object6A based on the worn state information on the avatar object6A. The processor210may update the facial expression of the avatar object6B based on the face information on the avatar object6B contained in the control information on the avatar object6B. Specifically, for example, inFIG. 28, the processor210sets the posture of the avatar object6A to a standing state (example of third state).
Similarly to at least one embodiment inFIG. 22, the processor210may generate a speech bubble object associated with the avatar object6A in the virtual space11B. Information indicating the fact that the user5A has put on the HMD120(e.g., “user5A has returned to seat”) is displayed on the speech bubble object.
After that, the processor210updates the virtual space data on the virtual space11B, and also updates the field-of-view region15of the avatar object6B in association with the motion of the HMD120of the HMD set110B. Next, the processor210updates the field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays the field-of-view image on the HMD120based on the updated field-of-view image data (Step S2379).
On the other hand, in response to a determination in Step S2373that the user5A has not put on the HMD120within the certain period of time from reception of the “read” information (NO in Step S2373), the processor210proceeds to processing ofFIG. 29. Next, inFIG. 29, the processor210generates the unworn state information indicating the fact that the user5A is not wearing the HMD120(Step S2980). After that, the processor210transmits the unworn state information to the server600via the network2(Step S2981).
Next, after the server600receives the unworn state information from the HMD set110A, the server600transmits the unworn state information to the HMD set110B via the network2(Step S2982).
Next, the processor210of the HMD set110B sets the facial expression of the avatar object6A arranged in the virtual space11B to the default facial expression based on the unworn state information received from the server600(Step S2983). For example, when the default facial expression is the closed-eye state, inFIG. 24, the facial expression of the avatar object6A is updated to the closed-eye state. In the processing of Step S2983, the facial expression of the avatar object6A may be set to the default facial expression when the control information and the worn state information on the avatar object6A are not received within a certain period of time measured by the processor210of the HMD set110B since reception of the “read” information. When the eyes of the avatar object6A are opened based on the “read” information in Step S2370ofFIG. 23, in Step S2983, the facial expression of the avatar object6A is returned to the closed-eye state. On the other hand, when the participation request notification is not read (NO in Step S2366), the avatar object6A is kept to be in the closed-eye state, and thus the facial expression of the avatar object6A is not changed in Step S2983.
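The timeout behavior around Step S2983 could be expressed roughly as follows; the monotonic-clock timer, the timeout length, and the expression labels are illustrative assumptions rather than details of the disclosure.

```python
import time
from typing import Optional

class AvatarExpressionController:
    """Track whether the remote user re-wears the HMD in time after the "read" info."""

    def __init__(self, timeout_seconds: float = 60.0):
        self.timeout_seconds = timeout_seconds
        self.read_received_at: Optional[float] = None

    def on_read_info(self) -> str:
        """Step S2370 analogue: open the avatar's eyes and start the timer."""
        self.read_received_at = time.monotonic()
        return "eyes_open"

    def on_tick(self, worn_state_received: bool) -> Optional[str]:
        """Step S2983 analogue: revert to the default expression on timeout."""
        if self.read_received_at is None:
            return None
        if worn_state_received:
            self.read_received_at = None
            return "controlled_by_user"
        if time.monotonic() - self.read_received_at > self.timeout_seconds:
            self.read_received_at = None
            return "eyes_closed"  # default facial expression restored
        return None
```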
After that, the processor210updates the virtual space data representing the virtual space11B, and updates the field-of-view region15of the avatar object6B based on the motion of the HMD120of the HMD set110B. Next, the processor210updates the field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays the field-of-view image on the HMD120based on the updated field-of-view image data (Step S2984).
As described above, the information processing method according to at least one embodiment includes moving the virtual hand2400in the virtual space11based on the motion of the user5B. The method further includes performing an action (first action), for example, shaking motion performed by the virtual hand2400, on the avatar object6A, whose motion can be controlled in the virtual space11based on the motion of the user5A. The method further includes performing transmission (second action) of the participation request notification for requesting the user5A (external device700associated therewith) to participate in the virtual space11based on the execution of the action. With this method, the plurality of users5A and5B sharing the virtual space11can smoothly communicate to/from each other in the virtual space11without impairing the sense of immersion of those users by performing predetermined actions through an intuitive operation that uses the virtual hand2400. In this manner, the virtual experience of the user is improved by providing a user interface (so-called diegetic UI) that does not require a menu operation or a key operation for communication between users.
In particular, the user5A can easily recognize the fact that the user5B has requested the user5A to participate in content provided in the virtual space11through transmission of the participation request notification to the external device700associated with the user5A based on the specific action performed by the user5B on the avatar object6A. With this, usage of content (e.g., multiplayer game) that is to be played by a plurality of users in the virtual space11is encouraged.
In the information processing method according to at least one embodiment, the participation request notification is transmitted to the user5A when the user5B has performed a specific action on the avatar object6A under a state in which the user5A is not controlling the avatar object6A (namely, under state in which user5A is not wearing HMD120). Thus, the user5A, who is not wearing the HMD120, is induced to play content provided in the virtual space11, and encouraged to participate in the content.
In particular, in a VR game, when there are few users (avatars) waiting in a room, that is, when there are few candidates for partners of a multiplayer game, the user (avatar) often enters a room for the moment and waits, without wearing the HMD, until candidates for partners of the multiplayer game appear. Therefore, a waiting user is induced to play the multiplayer VR game.
The information processing method according to at least one embodiment may further include transitioning the avatar object6A in the virtual space11B from the closed-eye state (first state) to the opened-eye state (second state) when the user5A has reacted to the participation request notification (for example, when message M1is read), and transitioning the avatar object6A in the virtual space11B to a standing state (third state) when the user5A has started to control the avatar object6A within a certain period of time after the avatar object6A transitioned to the opened-eye state. With this, the user5B can easily grasp the fact that the user5A has reacted or the user5A has put on the HMD120to control the avatar object6A.
On the other hand, when the user5A has not entered the state of controlling the avatar object6A within the certain period of time after the avatar object6A transitioned to the opened-eye state, the avatar object6A in the virtual space11B may be returned to the closed-eye state. With this, the user5B can easily grasp the fact that the user5A has not entered the state of controlling the avatar object6A (that is, fact that user5A has not participated in content) even though the user5B has transmitted the participation request notification. Therefore, the user5B can shake the avatar object6A again to request participation in the virtual space again, or can request avatars that are associated with users other than the user5A who share the virtual space11to participate in the virtual space.
The state of the user5A not controlling the avatar object6A may include not only a case of the user5A not wearing the HMD120but also a case of the user5A not being logged in to the content provided in the virtual space11in the first place. In the case of the user5A not being logged in to the content, the avatar object6A is usually not present in the virtual space11B provided by the HMD set110B of the user5B. However, in a case where the user5B registers the user5A as a "friend", the user5B can cause the avatar object6A of the user5A to be present in the virtual space11B even when the user5A is not logged in to the content. The friend registration may be stored in, for example, the server600together with personal information on each user. In this case, the processor210of the HMD set110B can set the posture and facial expression of the avatar object6A arranged in the virtual space11B to the default based on the fact that the user5A is not logged in to the content (Step S2355and Step S2356ofFIG. 23). In this manner, the avatar object6A associated with the user5A, who is in the state of not being logged in to the content, is caused to be present in the virtual space11B, and the participation request notification is transmitted to the user5A based on execution of a specific action on the avatar object6A by the avatar object6B, to thereby be able to motivate the user5A to log in to the content.
Details of the participation request notification (message M1) may be different depending on the strength of the specific action on the avatar object6A by the avatar object6B. For example, a plurality of threshold values may be provided in a stepwise manner for the acceleration of a motion of the virtual hand2400, and details of the message M1may be changed depending on the threshold value of the acceleration. For example, the message M1may contain information on a participation request level, and the information on a participation request level may be changed depending on the strength of the shaking motion. Specifically, when the shaking motion is intense, details (e.g., “participation request level: three stars”) indicating the fact that the participation request level exhibited by the user5B is high may be displayed as the message M1. A predetermined threshold value may be provided for the number of times that the virtual hand2400passes relative coordinates associated with the avatar object6A, and the strength of the motion of a specific action may be identified based on the threshold value for the number of times of passage.
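A minimal sketch of deriving such a stepwise participation request level from the action intensity is shown below; the star thresholds, the function names, and the message wording are invented for illustration and do not come from the disclosure.

```python
def participation_request_level(peak_acceleration: float, coordinate_passes: int) -> int:
    """Map the intensity of the specific action to a stepwise request level (1 to 3).

    A higher peak acceleration of the virtual hand, or more passes through the
    relative coordinates associated with the avatar, yields a higher level.
    """
    if peak_acceleration >= 8.0 or coordinate_passes >= 6:
        return 3
    if peak_acceleration >= 5.0 or coordinate_passes >= 4:
        return 2
    return 1

def build_message(partner_name: str, content_name: str, level: int) -> str:
    """Compose the message M1 body, including the request level."""
    return (f"{partner_name} is now contacting you in {content_name}. "
            f"Participation request level: {level} star(s). "
            f"Let's activate the content and join them!")

# Usage example: an intense shake produces the highest level.
print(build_message("user5B", "Sample Multiplayer Game",
                    participation_request_level(peak_acceleration=9.0, coordinate_passes=2)))
```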
Details of the message M1may be changed when a plurality of avatars have performed a specific action on the avatar object6A at the same time (or in cooperation with one another). In at least one embodiment, a dedicated collider is set at each specific position of the body of the avatar object6A so that a plurality of avatars can hold the avatar object6A at the same time. With this, the plurality of avatars can perform motions of shaking the avatar object6A. In this case, as the number of users performing a specific action on the avatar object6A becomes larger, the participation request level preferably becomes higher.
Details of the message M1may be different depending on the type of a specific action performed on the avatar object6A. For example, when the avatar object6B slaps the avatar object6A, the participation request level displayed in the message M1may be set higher compared to a case of the avatar object6B shaking the avatar object6A.
In this manner, details of the participation request notification are set different depending on the intensity of the motion of a specific action on the avatar object6A by the avatar object6B or the type of the specific action, to thereby allow the user5A to grasp whether the participation request level for the content exhibited by the user5B is high or low. Therefore, the user5A can determine whether or not to participate in the content depending on the degree of the participation request level.
In at least one embodiment of this disclosure, an action involving a physical interaction is exemplified as the specific action on the avatar object6A by the avatar object6B. However, the specific action is not limited thereto. The specific action may be an action for identifying the user5A, and the specific action performed by the avatar object6B on the avatar object6A may be identified through, for example, the line of sight, a voice, a laser pointer, a facial expression (e.g., smile or glare), or clapping. When the user5B desires some avatar (user) to participate in the content, the user5B may select the avatar (user) from an avatar list (user list) displayed in the virtual space11B. That is, the specific action performed by the avatar object6B may be a direct action or an indirect action on the avatar object6A. The direct action and the indirect action may be combined with each other.
The timing of transmitting a participation request notification may be set different depending on details of the content provided in the virtual space11. For example, when the number of users required to start a multiplayer game is three or more, and the processor210of the HMD set110B determines that the user5B has performed a specific action on the last avatar among a plurality of avatars required to start the multiplayer game, the processor210of the HMD set110B may transmit a participation request notification at the same time to each user associated with one of the plurality of avatars on which the specific action has been performed. Even when the number of users required to start the multiplayer game is three or more, the processor210of the HMD set110B may transmit a participation request notification individually to each avatar at a timing of the avatar object6B having performed a specific action on the avatar.
In the above-mentioned at least one embodiment of this disclosure, the facial expression of the avatar object6A, who has closed his or her eyes, is updated to an opened-eye state based on reaction of the user5A to a participation request notification (e.g., in response to message M1for participation request notification being read). However, the manner of update is not limited thereto. For example, the eyes of the avatar object6A may be updated to the opened-eye state based on the avatar object6B having performed a specific action on the avatar object6A. With this, the user5B can easily grasp the fact that the specific action has appropriately been performed on the avatar object6A. The eyes of the avatar object6A may be updated to the opened-eye state based on the user5A having worn the HMD120.
The information processing method according to at least one embodiment includes performing a predetermined action (example of first action) involving a physical interaction on the user5A by the user5B in the virtual space11. The method further includes issuing participation request notification (example of second action) to the user5A based on execution of the action. The method further includes executing update (example of third action) of the posture and/or facial expression of the avatar object6A in the virtual space11based on reaction of the user5A to the participation request notification (e.g., in response to message M1for participation request notification being read). With this method, the users5A and5B can smoothly communicate to/from each other in the virtual space11by updating the posture and facial expression of the avatar object6A associated with the user5A in the virtual space11B based on reaction of the user5A to the participation request notification transmitted to the user5A. In this manner, a seamless virtual experience is provided so as to obscure the boundary between the real space and the virtual space by translating, in the virtual space11, reaction of the user5A to the participation request notification in the real space.
Next, with reference toFIG. 30toFIG. 33, a description is given of an information processing method according to at least one embodiment of this disclosure. In at least one embodiment, there is exemplified an information processing method to be executed at the time of starting a game in which only the user5B plays the game in the virtual space11(11B) (so-called single player game).FIG. 30is a flowchart of an information processing method according to at least one embodiment of this disclosure.FIG. 31toFIG. 33are diagrams of the virtual space11B to be provided to the user5B, which are used to describe the information processing method according to at least one embodiment of this disclosure. InFIG. 31, in at least one embodiment, the virtual space11B includes the avatar object6B and an avatar object6C. The avatar object6C is controllable by the server600(so-called non-player character (NPC)).
With reference toFIG. 30, first, the processor210of the HMD set110B receives input for starting a game based on an operation by the user5B (Step S3000). Next, the processor210transmits game start input information to the server600via the network2(Step S3001).
Next, the server600reads opening movie data from the storage630based on the game start input information received from the processor210(Step S3002). The opening movie is a video image to be reproduced in the virtual space11B as a scenario scene (example of first scene) introduced at the time of starting the game. Next, the server600transmits the opening movie data to the processor210of the HMD set110B via the network2(Step S3003).
Next, the processor210of the HMD set110B reproduces the opening movie in the virtual space11B based on the opening movie data received from the server600(Step S3004).
During or after reproduction of the opening movie, the processor210of the HMD set110B sets the posture and facial expression of the avatar object6C to the default posture and facial expression (Step S3005). When the default posture is a lying state, and the default facial expression is a closed-eye state, inFIG. 31, the posture of the avatar object6C is set to the lying state, and the facial expression of the avatar object6C is set to the closed-eye state.
Next, the processor210updates the virtual space data on the virtual space11B, and also updates the field-of-view region15of the avatar object6B in association with the motion of the HMD120of the HMD set110B. Next, the processor210updates field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays the field-of-view image on the HMD120based on the updated field-of-view image data (Step S3006).
Next, after the field-of-view image is updated in Step S3006, the processor210determines whether or not the avatar object6B associated with the user5B has performed a specific action (example of first action) on the avatar object6C associated with the server600(Step S3007). The processing of determining whether or not the specific action has been performed is the same as that described in the above-mentioned at least one embodiment, and thus a detailed description thereof is omitted here. After the reproduction of the opening movie in Step S3004, the processor210may display a message "Please wake the avatar object6C up" in the virtual space11B in order to induce the avatar object6B to perform the specific action on the avatar object6C.
Next, when the processor210determines that the avatar object6B has performed a specific action on the avatar object6C (YES in Step S3007), the processor210generates action execution information indicating the fact that the user5B has performed the specific action (Step S3008). After that, the processor210transmits the action execution information to the server600via the network2(Step S3009).
Next, the server600generates scene transition information based on the action execution information received from the HMD set110B (Step S3010). The scene transition information is information for transitioning the game provided in the virtual space11B from a scenario scene to a game scene (example of second scene). Next, the server600transmits the scene transition information to the processor210of the HMD set110B via the network2(Step S3011).
Next, the processor210of the HMD set110B updates the posture and facial expression of the avatar object6C based on the scene transition information received from the server600(Step S3012). Specifically, for example, inFIG. 33, the posture of the avatar object6C is set to a standing state, and the facial expression of the avatar object6C is set to an opened-eye state.
Next, the processor210generates a speech bubble object3345A associated with the avatar object6C in the virtual space11B (Step S3013). The speech bubble object3345A displays information indicating the fact that the avatar object6C has joined the party of the avatar object6B (e.g., “Avatar object6C has joined party”).
After that, the processor210updates the virtual space data on the virtual space11B, and also updates the field-of-view region15of the avatar object6B in association with the motion of the HMD120of the HMD set110B. Next, the processor210updates field-of-view image data based on the updated virtual space data and the updated field-of-view region15of the avatar object6B, and displays the field-of-view image on the HMD120based on the updated field-of-view image data (Step S3014).
After that, the processor210starts to display a game scene based on the scene transition information (Step S3015).
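Seen end to end, the scenario-to-game transition ofFIG. 30 amounts to a small state machine: the game scene starts once the specific action has been performed on the NPC avatar. The sketch below condenses Steps S3007 to S3015 under assumed names (Scene, SinglePlayerSession) that do not appear in the disclosure.

```python
from enum import Enum, auto

class Scene(Enum):
    SCENARIO = auto()  # opening movie / scenario scene
    GAME = auto()      # game scene

class SinglePlayerSession:
    """Condensed view of Steps S3007 to S3015: the game scene starts once the
    specific action has been performed on the NPC avatar."""

    def __init__(self):
        self.scene = Scene.SCENARIO
        self.npc_posture = "lying"
        self.npc_expression = "eyes_closed"

    def on_specific_action(self) -> None:
        """Equivalent to receiving the scene transition information."""
        self.npc_posture = "standing"
        self.npc_expression = "eyes_open"
        self.scene = Scene.GAME

# Usage example: a shaking motion on the NPC avatar starts the game scene.
session = SinglePlayerSession()
session.on_specific_action()
assert session.scene is Scene.GAME
```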
As described above, the information processing method according to at least one embodiment includes performing a specific action, for example, a shaking motion, with the virtual hand2400on the avatar object6C, whose motion in the virtual space11B can be controlled by a computer. The method further includes transitioning a game provided in the virtual space11B from a scenario scene (first scene) to a game scene (second scene) based on the execution of the specific action. Hitherto, in order to start the game scene, a user interface (e.g., "Press any button") for progressing the game has been displayed after reproduction of the scenario scene, and the user has pressed a predetermined button on the external controller300to start the game scene. In contrast, with the method according to at least one embodiment, the game scene is started based on the execution of the specific action on the avatar object6C by the user5B, and thus the user5B can progress the game without impairing the sense of immersion into the virtual space11B. In at least one embodiment, the description has been given by taking an exemplary case in which a system including the server600and the HMD set110B executes the processing of at least one embodiment. However, the execution of the processing is not limited thereto, and the HMD set110B may execute the processing automatically.
The information processing method for progressing the game according to at least one embodiment is not limited to the scene of starting a game, but is applicable to a scene of restarting the game in the middle of the game. For example, when a game to be played by a party organized by a plurality of avatars (e.g., avatar object6B and avatar object6C) is provided to the virtual space11B, and a specific action has been determined to be performed on the avatar object6C by the avatar object6B after interruption of the game progress, the game may be restarted. Specifically, in a scene in which the party stays at an inn, in at least one embodiment, the avatar object6C is still sleeping even in the morning. Under this state, the avatar object6B may perform a specific action such as a shaking motion or a slapping motion on the avatar object6C, to thereby cause the avatar object6C to wake up and restart progress of the game. With such a method, progress of the game is restarted without impairing the sense of immersion of the user5B into the virtual space11B.
The order of processing steps defined in the respective steps in each of FIG. 23, FIG. 29, and FIG. 30 is just an example, and the order of those steps can appropriately be changed.
In the descriptions of the above-mentioned at least one embodiment and modification examples, whether or not the HMD 120 is worn is detected based on inclination information on the HMD 120 transmitted from the sensor 190 or event information transmitted from the wearing sensor 195. However, the manner of detection is not limited thereto. For example, the processor 210 of the user terminal may detect the motion of the HMD 120 with the HMD sensor 410 and/or the sensor 190, and identify the fact that the user has put on the HMD 120 or removed the HMD 120 when the motion has formed a predetermined locus. The processor 210 may detect whether or not the HMD 120 has moved with the HMD sensor 410 and/or the sensor 190, and identify the fact that the user is not wearing the HMD 120 when the position of the HMD 120 has not changed within a certain period of time. Further, the processor 210 may detect whether or not the user is wearing the HMD 120 by detecting a relative positional relationship between the HMD 120 and the external controller 300. For example, when the HMD 120 and the external controller 300 are away from each other by a fixed distance or more, the processor 210 may determine that the user is not wearing the HMD 120.
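Two of the alternative checks described above, namely no HMD movement for a certain period of time and a fixed distance or more between the HMD and the external controller, might look like the following sketch. The thresholds, units, and data shapes are assumptions rather than values taken from this disclosure.

import math
import time

def hmd_idle_too_long(position_history, idle_seconds=30.0, epsilon=0.01):
    """Return True when the HMD position has not changed (within epsilon)
    for at least idle_seconds, treated here as the user not wearing the HMD.
    position_history is a list of (timestamp, (x, y, z)) samples."""
    if not position_history:
        return False
    now = time.monotonic()
    if now - position_history[0][0] < idle_seconds:
        return False  # not enough history yet to decide
    recent = [(t, p) for t, p in position_history if now - t <= idle_seconds]
    if not recent:
        return False
    base = recent[0][1]
    return all(math.dist(base, p) <= epsilon for _, p in recent)

def hmd_far_from_controller(hmd_pos, controller_pos, max_distance=1.5):
    """Return True when the HMD and the external controller are separated by
    a fixed distance or more, suggesting the HMD has been taken off."""
    return math.dist(hmd_pos, controller_pos) >= max_distance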
In the description of the above-mentioned at least one embodiment and modification examples, it is assumed that the virtual space data representing the virtual space 11B is updated by the HMD set 110B, but the virtual space data may be updated by the server 600. Further, in at least one embodiment, the field-of-view image data corresponding to the field-of-view image is updated by the HMD set 110B, but the field-of-view image data may be updated by the server 600. In this case, the HMD set 110B displays the field-of-view image on the HMD 120 based on the field-of-view image data transmitted from the server 600.
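As a rough sketch of this server-side variant, assuming a hypothetical message format that is not defined in this disclosure, the HMD set would simply display whatever field-of-view image data the server transmits, with no local re-rendering.

class FakeHMD:
    def display(self, image_data):
        print(f"displaying {len(image_data)} bytes of field-of-view image data")

def on_message_from_server(message, hmd):
    # The HMD set does no local rendering; it simply displays the
    # field-of-view image data transmitted from the server.
    if message.get("type") == "field_of_view_image":
        hmd.display(message["image_data"])

on_message_from_server({"type": "field_of_view_image", "image_data": b"\x00" * 16},
                       FakeHMD())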
The order of processing steps defined in the respective steps in each of FIG. 17 and FIG. 21 is just an example, and the order of those steps can appropriately be changed.
In at least one embodiment of this disclosure, a description has been given by taking, as an example, the virtual space (VR space) in which the user is immersed by using an HMD device. However, a see-through HMD device may be adopted as the HMD device. In this case, a virtual experience in an augmented reality (AR) space or a mixed reality (MR) space may be provided to the user through output of a field-of-view image that is obtained by combining a part of an image forming the virtual space with the real space visually recognized by the user via the see-through HMD device. In this case, an action may be performed on a target object in the virtual space based on the motion of a hand of the user instead of the virtual hand. Specifically, the processor may identify coordinate information on the position of the hand of the user in the real space, and define the position of the target object in the virtual space in association with coordinate information on the target object in the real space. With this, the processor can grasp the positional relationship between the hand of the user in the real space and the target object in the virtual space, and execute processing corresponding to, for example, the above-mentioned control of collision between the hand of the user and the target object. As a result, an action can be performed on the target object based on the motion of the hand of the user.
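A minimal sketch of the hand-to-object collision check in the AR/MR case might be as follows; the sphere-based collision test, the collision radius, and the coordinate values are assumptions introduced only for illustration.

import math

def hand_collides_with_target(hand_pos, target_pos, radius=0.1):
    """Treat the target object as a sphere in real-space coordinates; a
    collision occurs when the tracked hand enters the sphere."""
    return math.dist(hand_pos, target_pos) <= radius

if hand_collides_with_target((0.32, 1.05, -0.40), (0.30, 1.00, -0.42)):
    print("execute the same processing as the virtual-hand collision control")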
In order to implement various types of processing to be executed by the processor 210 of the HMD set 110 with use of software, a control program for executing various types of processing on a computer (processor) may be installed in advance into the storage 230 or the memory 220. Alternatively, the control program may be stored in a computer-readable storage medium, for example, a magnetic disk (HDD or floppy disk), an optical disc (e.g., CD-ROM, DVD-ROM, or Blu-ray (trademark) disc), a magneto-optical disk (e.g., MO), and a flash memory (e.g., SD card, USB memory, or SSD). In this case, the storage medium is connected to the computer 200, and thus the control program stored in the storage medium is installed into the storage 230. After that, the control program installed in the storage 230 is loaded onto the RAM, and the processor executes the loaded program. In this manner, the processor 210 executes the various types of processing.
The control program may be downloaded from a computer on the communication network 2 via the communication interface 250. Also in this case, the downloaded program is similarly installed into the storage 230.
This concludes the description of at least one embodiment of this disclosure. However, the description of at least one embodiment of this disclosure is not to be read as a restrictive interpretation of the technical scope of this disclosure. At least one embodiment of this disclosure is merely given as an example, and it is to be understood by a person skilled in the art that various modifications can be made to at least one embodiment of this disclosure within the scope of this disclosure set forth in the appended claims. Thus, the technical scope of this disclosure is to be defined based on the scope of this disclosure set forth in the appended claims and an equivalent scope thereof.
Claims
- A method, comprising: defining a virtual space, the virtual space comprising a first avatar object and a second avatar object, the first avatar object being associated with a first user terminal, the first user terminal comprising a first head-mounted device (HMD) associated with a first user, the second avatar object being associated with a second user terminal, the second user terminal comprising a second HMD associated with a second user; defining a visual field in the virtual space in association with a motion of the second HMD; generating a visual-field image that corresponds to the visual field; displaying the visual-field image on the second HMD; receiving first information indicating that the first user is not wearing the first HMD; changing the visual-field image on the second HMD in response to the first information being received, wherein the changing the visual-field image comprises changing a posture of the first avatar object from a first posture prior to receiving the first information to a second posture after receiving the first information; receiving instructions from the second HMD for the second avatar object interacting with the first avatar object when the first avatar object has the changed posture; and sending a notification to the first user outside of the virtual space in response to the second avatar object interacting with the first avatar object when the first avatar object has the changed posture.
- The method according to claim 1, further comprising displaying information indicating that the first user is not wearing the first HMD in the visual-field image in response to the first information being received.
- The method according to claim 1, further comprising: displaying a face of the first avatar object in a first mode; receiving second information for identifying a real facial expression of the first user; and updating the face of the first avatar object to a second mode in response to the second information being received.
- The method according to claim 3, wherein the first mode comprises a facial expression that is determined in advance irrespective of the real facial expression of the first user.
- The method according to claim 3, further comprising: displaying the face of the first avatar object in the first mode in response to the first information being received; and displaying the face of the first avatar object in the second mode in response to failing to receive the first information.
- The method according to claim 1, wherein the first user terminal comprises a wearing sensor, the wearing sensor being configured to detect whether the first user is wearing the first HMD, and wherein the method further comprises: outputting, by the first user terminal, the first information in response to detection by the wearing sensor that the first user is not wearing the first HMD; transmitting, by the first user terminal, the first information to the second user terminal; and receiving, by the second user terminal, the first information transmitted from the first user terminal.
- The method according to claim 1, further comprising: receiving third information indicating that the first user has put on the first HMD again after the first user removed the first HMD; and updating the visual-field image on the second HMD in response to the third information being received.
- The method according to claim 1, wherein the changing of the posture of the first avatar object comprises changing the first avatar object from a standing posture to a laying down posture.